==SOCR News & Events: 2021 JMM/AMS Special Session on Foundations of Data Science: Mathematical Representation, Computational Modeling, and Statistical Inference==

==Overview==

The volume, heterogeneity, and velocity of digital information are increasing exponentially, faster than our ability to manage, interpret, and analyze it. Novel mathematical algorithms, reliable statistical techniques, and powerful computational tools are necessary to cope with the enormous proliferation of data in all aspects of human experience. There are a number of mathematical strategies to represent, model, analyze, interpret, and visualize complex, voluminous, and high-dimensional data. The talks in this session will present advanced and alternative mathematical strategies for handling difficult data science challenges using differential-equation, topological-embedding, tensor-based, analytical, numerical-optimization, algebraic, multiresolution, variational, probabilistic, statistical, and artificial-intelligence methods. Biomedical, environmental, and imaging examples will demonstrate the applications of such mathematical techniques to longitudinal, complex-valued, complex-time indexed, and incongruent observations.

==Organizer==

==Session Logistics==

==Speakers==

''To be finalized in September 2020 ...''

* [https://www.carolineuhler.com/ Caroline Uhler (MIT)]: ''Multi-Domain Data Integration: From Observations to Mechanistic Insights'' ([http://www.ams.org/amsmtgs/2247_abstracts/1163-62-32.pdf Abstract 1163-62-32])
 
: Massive data collection holds the promise of a better understanding of complex phenomena and, ultimately, of better decisions. An exciting opportunity in this regard stems from the growing availability of perturbation/intervention data (manufacturing, advertisement, education, genomics, etc.). In order to obtain mechanistic insights from such data, a major challenge is the integration of different data modalities (video, audio, interventional, observational, etc.). Using genomics, and in particular the problem of identifying drugs for repurposing against COVID-19, as an example, I will first discuss our recent work on coupling autoencoders in the latent space to integrate and translate between data of very different modalities such as sequencing and imaging. I will then present a framework for integrating observational and interventional data for causal structure discovery and characterize the causal relationships that are identifiable from such data. We end with a theoretical analysis of autoencoders linking overparameterization to memorization. In particular, I will characterize the implicit bias of overparameterized autoencoders and show that such networks trained using standard optimization methods implement associative memory. Collectively, our results have major implications for planning and learning from interventions in various application domains.
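: A minimal sketch of the latent-space coupling idea, assuming two toy modalities and a shared latent dimension; the architecture, loss weights, and sizes below are illustrative assumptions, not the speaker's implementation:
<syntaxhighlight lang="python">
# Hypothetical sketch: couple two autoencoders by penalizing the distance
# between latent codes of paired cross-modal samples, so that decoding one
# modality's code with the other's decoder performs cross-modal translation.
import torch
import torch.nn as nn

latent = 8
enc_a = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, latent))
dec_a = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, 100))
enc_b = nn.Sequential(nn.Linear(50, 32), nn.ReLU(), nn.Linear(32, latent))
dec_b = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, 50))

params = [*enc_a.parameters(), *dec_a.parameters(),
          *enc_b.parameters(), *dec_b.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

x_a = torch.randn(64, 100)  # stand-in for modality A (e.g., sequencing)
x_b = torch.randn(64, 50)   # stand-in for paired modality B (e.g., imaging)

for step in range(200):
    z_a, z_b = enc_a(x_a), enc_b(x_b)
    loss = (mse(dec_a(z_a), x_a)    # reconstruct modality A
            + mse(dec_b(z_b), x_b)  # reconstruct modality B
            + mse(z_a, z_b))        # couple the two latent spaces
    opt.zero_grad(); loss.backward(); opt.step()

# After training, dec_b(enc_a(x_a)) translates modality A into modality B.
</syntaxhighlight>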
  
* [https://www.math.fsu.edu/People/faculty.php?id=1783 Tom Needham (Florida State University)]: ''Applications of Gromov-Wasserstein distance to network science'' ([http://www.ams.org/amsmtgs/2247_abstracts/1163-52-68.pdf Abstract 1163-52-68])
 
: Recent years have seen a surge of research activity in network analysis through the lens of optimal transport. This perspective boils down to the following simple idea: when comparing two networks, instead of considering a traditional registration between their nodes, one instead searches for an optimal 'soft' or probabilistic correspondence. This perspective has led to state-of-the-art algorithms for robust large-scale network alignment and network partitioning tasks. A rich mathematical theory underpins this work: optimal node correspondences realize the Gromov-Wasserstein (GW) distance between networks. GW distance was originally introduced, independently by K. T. Sturm and Facundo Mémoli, as a tool for studying abstract convergence properties of sequences of metric measure spaces. In particular, Sturm showed that GW distance can be understood as a geodesic distance with respect to a Riemannian structure on the space of isomorphism classes of metric measure spaces (the 'Space of Spaces'). In this talk, I will describe joint work with Samir Chowdhury, in which we develop computationally efficient implementations of Sturm's ideas for network science applications. We also derive theoretical results which link this framework to classical notions from spectral network analysis.
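: For orientation, the order-2 Gromov-Wasserstein distance between metric measure spaces <math>(X, d_X, \mu)</math> and <math>(Y, d_Y, \nu)</math> minimizes a metric-distortion functional over probabilistic couplings (a standard formulation, up to normalization conventions):
: <math>\mathrm{GW}_2(X, Y)^2 = \min_{\pi \in \Pi(\mu, \nu)} \int_{X \times Y} \int_{X \times Y} \left| d_X(x, x') - d_Y(y, y') \right|^2 \, d\pi(x, y) \, d\pi(x', y'),</math>
: where <math>\Pi(\mu, \nu)</math> is the set of measures on <math>X \times Y</math> with marginals <math>\mu</math> and <math>\nu</math>; the optimal <math>\pi</math> is exactly the 'soft' node correspondence described above.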
  
* [http://www.cs.utah.edu/~jeffp/ Jeff M. Phillips (Utah)]: ''A Primer on the Geometry in Machine Learning'' ([http://www.ams.org/amsmtgs/2247_abstracts/1163-52-52.pdf Abstract 1163-52-52])
 
: Machine Learning is a discipline filled with many simple geometric algorithms, the central task of which is usually classification. These varied approaches all take as input a set of ''n'' points in ''d'' dimensions, each with a label. In learning, the goal is to use this input data to build a function which predicts a label accurately on new data drawn from the same unknown distribution as the input data. The main difference in the many algorithms is largely a result of the chosen class of functions considered. This talk will take a quick tour through many approaches from simple to complex and modern, and show the geometry inherent at each step. Pit stops will include connections to geometric data structures, duality, random projections, range spaces, and coresets.
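: As a toy instance of this geometric viewpoint, the sketch below fits a halfspace classifier (a hyperplane with normal vector ''w'' and offset ''b'') using generic perceptron updates; the data and parameters are illustrative, not taken from the talk:
<syntaxhighlight lang="python">
# Generic perceptron sketch: learn a separating hyperplane for n labeled
# points in d dimensions -- the simplest geometric classifier.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # n = 200 points in d = 2
y = np.sign(X @ np.array([1.0, -2.0]) + 0.5)   # labels from a true hyperplane

w, b = np.zeros(2), 0.0
for _ in range(100):                           # perceptron passes
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:             # point on the wrong side?
            w += yi * xi                       # rotate the hyperplane toward it
            b += yi

print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
</syntaxhighlight>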

* [https://www.jonathannilesweed.com/ Jonathan Niles-Weed (NYU, Courant Institute / Center for Data Science)]: ''Statistical estimation under group actions'' ([http://www.ams.org/amsmtgs/2247_abstracts/1163-62-41.pdf Abstract 1163-62-41])
 
: A common challenge in the sciences is the presence of heterogeneity in data. Motivated by problems in signal processing and computational biology, we consider a particular form of heterogeneity where observations are corrupted by random transformations from a group (such as the group of permutations or rotations) before they can be collected and analyzed. We establish the fundamental limits of statistical estimation in such settings and show that the optimal rates of recovery are precisely governed by the invariant theory of the group. As a corollary, we establish rigorously the number of samples necessary to reconstruct the structure of molecules in cryo-electron microscopy. We also give a computationally efficient algorithm for a special case of this problem, and discuss conjectured statistical-computational gaps for the general case.
 
: Based on joint work with Afonso Bandeira, Ben Blum-Smith, Joe Kileel, Amelia Perry, Philippe Rigollet, Amit Singer, and Alex Wein.
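: Schematically, this is the 'orbit recovery' observation model (stated here for orientation): each sample is a random group element applied to an unknown signal, plus noise,
: <math>y_i = g_i \cdot \theta + \sigma \xi_i, \qquad g_i \sim \mathrm{Unif}(G), \quad \xi_i \sim \mathcal{N}(0, I_d), \quad i = 1, \ldots, n,</math>
: so only the orbit <math>\{ g \cdot \theta : g \in G \}</math> of the signal is identifiable, and the recovery rates discussed above are governed by the invariant polynomials of <math>G</math>.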
  
* [https://ani.stat.fsu.edu/~abarbu/ Adrian Barbu (Florida State University)]: ''A Novel Framework for Online Supervised Learning with Feature Selection'' ([http://www.ams.org/amsmtgs/2247_abstracts/1163-62-50.pdf Abstract 1163-62-50])
 
: Current online learning methods suffer from lower convergence rates and limited capability to recover the support of the true features compared to their offline counterparts. In this work, we present a novel online learning framework based on running averages and introduce online versions of some popular existing offline methods such as Elastic Net, Minimax Concave Penalty, and Feature Selection with Annealing. The framework can handle an arbitrarily large number of observations as long as the data dimension is not too large, e.g., ''p'' < 50,000. We prove the equivalence between our online methods and their offline counterparts and give theoretical true feature recovery and convergence guarantees for some of them. In contrast to existing online methods, the proposed methods can extract models of any sparsity level at any time. Numerical experiments indicate that our new methods enjoy high accuracy of true feature recovery and a fast convergence rate, compared with standard online and offline algorithms. We also show how the running averages framework can be used for model adaptation in the presence of model drift. Finally, we present applications to large datasets where again the proposed framework shows competitive results compared to popular online and offline algorithms.
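: A minimal sketch of the running-averages idea, with ordinary least squares standing in for the offline method (the penalized, feature-selecting variants in the talk build on the same sufficient statistics); all data and dimensions below are illustrative assumptions:
<syntaxhighlight lang="python">
# Hypothetical running-averages sketch: stream observations one at a time,
# keep only O(p^2) averages, and recover the offline OLS fit at any moment.
import numpy as np

p = 5
Sxx = np.zeros((p, p))          # running average of x x^T
Sxy = np.zeros(p)               # running average of y x
mu_x, mu_y, n = np.zeros(p), 0.0, 0

rng = np.random.default_rng(1)
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
for _ in range(10000):                   # data arrive one at a time
    x = rng.normal(size=p)
    y = x @ beta_true + 0.1 * rng.normal()
    n += 1
    Sxx += (np.outer(x, x) - Sxx) / n    # incremental mean updates
    Sxy += (y * x - Sxy) / n
    mu_x += (x - mu_x) / n
    mu_y += (y - mu_y) / n

# Offline-equivalent OLS from the running averages (centered normal
# equations); the raw data never need to be stored.
beta_hat = np.linalg.solve(Sxx - np.outer(mu_x, mu_x), Sxy - mu_y * mu_x)
print(np.round(beta_hat, 2))             # approximately beta_true
</syntaxhighlight>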
  
* [https://www.umich.edu/~dinov/ Ivo D. Dinov (University of Michigan)]: ''Data Science, Time Complexity, and Spacekime Analytics'' ([http://www.ams.org/amsmtgs/2247_abstracts/1163-62-33.pdf Abstract 1163-62-33])
 
: Human behavior, communication, and social interactions are profoundly augmented by the rapid immersion of digitalization and virtualization of all life experiences. This process presents important challenges of managing, harmonizing, modeling, analyzing, interpreting, and visualizing complex information. There is a substantial need to develop, validate, productize, and support novel mathematical techniques, advanced statistical computing algorithms, transdisciplinary tools, and effective artificial intelligence applications. ''Spacekime analytics'' is a new technique for modeling high-dimensional longitudinal data. This approach relies on extending the notions of time, events, particles, and wavefunctions to complex-time (''kime''), complex-events (''kevents''), data, and inference-functions. We will illustrate how the kime-magnitude (longitudinal time order) and kime-direction (phase) affect the subsequent predictive analytics and the induced scientific inference. The mathematical foundation of spacekime calculus reveals various statistical implications, including inferential uncertainty and a Bayesian formulation of spacekime analytics. Complexifying time allows the lifting of all commonly observed processes from the classical 4D Minkowski spacetime to a 5D spacekime manifold, where a number of interesting mathematical problems arise. Direct data science applications of spacekime analytics will be demonstrated using simulated data and clinical observations (e.g., sMRI, fMRI data).
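: In this formulation (following the spacekime literature), time is complexified as kime,
: <math>\kappa = t e^{i \varphi}, \qquad t > 0, \quad \varphi \in (-\pi, \pi],</math>
: where the magnitude <math>t</math> is the usual event order (time) and the phase <math>\varphi</math> indexes repeated measurements; classical 4D spacetime <math>(\mathbf{x}, t) \in \mathbb{R}^3 \times \mathbb{R}</math> then lifts to the 5D spacekime manifold <math>(\mathbf{x}, \kappa) \in \mathbb{R}^3 \times \mathbb{C}</math>.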
  


==Resources==

* Slides/papers: ''TBD''




