SOCR News & Events: 2021 ISI/WSC Training and Education Bootcamp on Data Science and Predictive Analytics (DSPA)
- Dr. Dinov is a professor of Health Behavior and Biological Sciences and Computational Medicine and Bioinformatics at the University of Michigan. He is a member of the Michigan Center for Applied and Interdisciplinary Mathematics (MCAIM) and a core member of the University of Michigan Comprehensive Cancer Center. Dr. Dinov serves as Director of the Statistics Online Computational Resource, Co-Director of the Center for Complexity and Self-management of Chronic Disease (CSCD Center), Co-Director of the multi-institutional Probability Distributome Project, Associate Director of the Michigan Institute for Data Science (MIDAS), and Associate Director of the Michigan Neuroscience Graduate Program (NGP). He is a member of the American Statistical Association (ASA), International Association for Statistical Education (IASE), American Mathematical Society (AMS), American Association for the Advancement of Science (AAAS), and an Elected Member of the International Statistical Institute (ISI).
- Date/Time: Wednesday & Thursday, June 16-17, 2021, 14:00-17:00 Central European Summer Time, CEST (UTC+2); 8:00-11:00 AM US-EDT.
- Registration: Registration Link, moderate registration fees apply.
- GoToMeeting: Webinar link.
- URL: Official ISI/WSC Course Website.
- Conference: 2021 ISI World Statistical Congress and WSC 2021 short courses.
- Session Format: Two daily sessions (3 hours each).
- Session URL: https://myumi.ch/erXm2.
This course will be based on a Data Science and Predictive Analytics (DSPA) course I teach at the University of Michigan. The training will provide intermediate to advanced learners with a solid data science foundation to address challenges related to collecting, managing, processing, interrogating, analyzing and interpreting complex health and biomedical datasets using R. Participants will gain skills and acquire a tool-chest of methods, software tools, and protocols that can be applied to a broad spectrum of Big Data problems.
Before diving into the mathematical algorithms, statistical computing methods, software tools, and health analytics, we will discuss a number of driving motivational problems. These will ground all the subsequent scientific discussions, data modeling, and computational approaches.
Assumed prior knowledge: completed undergraduate study with quantitative STEM exposure, some quantitative training, programming experience, and a high level of energy and motivation to learn. Participants should have R and RStudio preinstalled on their local computers.
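The R/RStudio prerequisite can be verified with a short session check before the first day; the package list below is only an illustrative assumption, not an official course requirement:

```r
# Quick pre-course environment check (package list is illustrative, not official)
if (getRversion() < "4.0.0") {
  message("Consider upgrading to a recent R release")
}
wanted  <- c("ggplot2", "dplyr")    # assumed examples of commonly used packages
missing <- setdiff(wanted, rownames(installed.packages()))
if (length(missing) > 0) {
  message("Please install: ", paste(missing, collapse = ", "))
}
```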
This course is based on active-learning and integrates driving motivational challenges with mathematical foundations, computational statistics, and modern scientific inference.
The training aims to provide effective, reliable, reproducible, and transformative data-driven discovery supporting open-science.
Trainees will develop scientific intuition, computational skills, and data-wrangling abilities to tackle Big biomedical and health data problems. Instructors will provide well-documented R-scripts and software recipes implementing atomic data-filters as well as complex end-to-end predictive big data analytics solutions.
Upon successful completion of this course, participants are expected to have moderate competency in at least two of the three competency areas: Algorithms and Applications, Data Management, and Analysis Methods. Specifically, participants will receive end-to-end R protocols, gain knowledge of ML/AI algorithms, explore data validation, wrangling, and visualization, and experiment with statistical inference and model-free machine learning tools.
| Competency Area | Topic | Scope | Examples |
|---|---|---|---|
| Algorithms and Applications | Tools | Working knowledge of basic software tools (command-line, GUI-based, or web services) | Familiarity with statistical programming languages, e.g., R or SciKit/Python, and database querying languages, e.g., SQL or NoSQL |
| Algorithms and Applications | Algorithms | Knowledge of core principles of scientific computing, applications programming, APIs, algorithm complexity, and data structures | Best practices for scientific and application programming, efficient implementation of matrix linear algebra and graphics, elementary notions of computational complexity, user-friendly interfaces, string matching |
| Algorithms and Applications | Application Domain | Data analysis experience from at least one application area, e.g., through coursework, an internship, or a research project | Applied domain examples include computational social sciences, health sciences, business and marketing, learning sciences, transportation sciences, and engineering and physical sciences |
| Data Management | Data validation & visualization | Curation, exploratory data analysis (EDA), and visualization | Data provenance, validation, and visualization via histograms, Q-Q plots, and scatterplots (ggplot, Dashboard, D3.js) |
| Data Management | Data wrangling | Skills for data normalization, cleaning, aggregation, and harmonization/registration | Data imperfections include missing values, inconsistent string formatting ('2016-01-01' vs. '01/01/2016'), PC/Mac/Linux time vs. timestamps, and structured vs. unstructured data |
| Data Management | Data infrastructure | Handling databases, web services, Hadoop, and multi-source data | Data structures, SOAP protocols, ontologies, XML, JSON, streaming |
| Analysis Methods | Statistical inference | Basic understanding of bias and variance, principles of (non)parametric statistical inference, and (linear) modeling | Biological variability vs. technological noise, parametric (likelihood) vs. non-parametric (rank-order statistics) procedures, point vs. interval estimation, hypothesis testing, regression |
| Analysis Methods | Study design and diagnostics | Design of experiments, power calculations and sample sizing, strength of evidence, p-values, false discovery rates | Multistage testing, variance-normalizing transforms, histogram equalization, goodness-of-fit tests, model overfitting, model reduction |
| Analysis Methods | Machine learning | Dimensionality reduction, k-nearest neighbors, random forests, AdaBoost, kernelization, SVM, ensemble methods, CNN | Empirical risk minimization; supervised, semi-supervised, and unsupervised learning; transfer, active, reinforcement, multiview, and instance learning |
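The inconsistent date encodings cited in the Data Wrangling row can be harmonized element by element in base R; the helper name `normalize_date` and the format list are hypothetical illustrations, not part of the course materials:

```r
# Harmonize mixed date encodings such as '2016-01-01' vs. '01/01/2016' (base R only)
normalize_date <- function(x, fmts = c("%Y-%m-%d", "%m/%d/%Y")) {
  for (f in fmts) {
    d <- as.Date(x, format = f)   # yields NA when the format does not match
    if (!is.na(d)) return(d)
  }
  as.Date(NA)                     # unrecognized encodings stay missing
}

raw   <- c("2016-01-01", "01/01/2016")
clean <- do.call(c, lapply(raw, normalize_date))  # both parse to the same Date
```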
- Foundations of R
- Managing Data in R
- Data Visualization
- Linear Algebra & Matrix Computing
- Dimensionality Reduction
- Lazy Learning: Classification Using Nearest Neighbors
- Probabilistic Learning: Classification Using Naive Bayes
- Decision Tree Divide and Conquer Classification
- Forecasting Numeric Data Using Regression Models
- Black Box Machine-Learning Methods: Neural Networks and Support Vector Machines
- Apriori Association Rules Learning
- k-Means Clustering
- Model Performance Assessment
- Improving Model Performance
- Specialized Machine Learning Topics
- Variable/Feature Selection
- Regularized Linear Modeling and Controlled Variable Selection
- Big Longitudinal Data Analysis
- Natural Language Processing/Text Mining
- Prediction and Internal Statistical Cross Validation
- Function Optimization
- Deep Learning, Neural Networks
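To give a flavor of the hands-on style, a topic such as k-Means Clustering can be demonstrated with base R alone; the `iris` dataset below is an assumption for illustration, not the course's actual data:

```r
# k-means on the standardized iris measurements (stats::kmeans, base R)
set.seed(2021)                                   # reproducible cluster assignment
km <- kmeans(scale(iris[, 1:4]), centers = 3, nstart = 25)
table(Cluster = km$cluster, Species = iris$Species)  # confusion-style summary
```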
- Welcome and introductions
- Course logistics (please come prepared with access to Internet-connected computers having local installations of R, the statistical computing environment, and RStudio, the graphical user interface and integrated development environment)
- Data manipulation and visualization
- Non-linear dimensionality reduction (UMAP & t-SNE)
- Supervised and Unsupervised, model-based and model-free prediction, regression, classification, and clustering
- Reticulation (Interoperability between R, Python, C/C++ and other languages)
- Role of optimization in AI/ML
- Activities and HTML5 demos.
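Non-linear embeddings such as UMAP and t-SNE rely on contributed packages (e.g., `umap::umap()` or `Rtsne::Rtsne()`); as a self-contained baseline under that assumption, the linear analogue with base R's `prcomp` looks like:

```r
# PCA baseline for the dimensionality-reduction session (base R only; UMAP/t-SNE
# would substitute umap::umap() or Rtsne::Rtsne() on the same scaled matrix)
X   <- scale(iris[, 1:4])          # standardize the four numeric features
pca <- prcomp(X)
emb <- pca$x[, 1:2]                # 2-D embedding, analogous to UMAP/t-SNE output
plot(emb, col = as.integer(iris$Species), pch = 19,
     xlab = "PC1", ylab = "PC2", main = "PCA embedding of iris")
```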
- Course Flyer
- DSPA Wikipedia.
- DSPA Springer Page & SpringerLink (PDF Download).
- dspa.predictive.space & DSPA MOOC Canvas Site.