Useful Background Reading for NeuroData

by Neurostorm


in this post i highlight some of the most important books and articles for our group, and likely other groups. this list is certainly imperfect, so please comment about books/articles that you think are missing.

books as references:

  1. exploratory data analysis (tukey)
  2. testing statistical hypotheses (lehmann & romano)
  3. theory of point estimation (lehmann & casella)
  4. convex optimization (boyd & vandenberghe)
  5. elements of information theory (cover & thomas)
  6. elements of statistical learning (hastie, tibshirani, & friedman)
  7. robust statistics (huber)
  8. density estimation (silverman)
  9. storytelling with data: read this before making any publication quality figures
  10. writing science: read this before trying to write a paper

statistics / pattern recognition / data science:

these articles are not the most cited articles, and not necessarily the first on a topic. however, each does an excellent job making a specific point that i believe is important to remember when doing data science.

  1. Model Based Clustering (Fraley & Raftery, 2002): explains the relationship between density estimation (via gaussian mixture modeling) and clustering; still the best approach to cluster medium-dimensional data, imho (see the short sketch after this list).
  2. model selection is arbitrary (George & Foster; 2000): demonstrates that all the different model selection criteria are in fact arbitrary special cases of a general prior with 2 parameters
  3. Energy Statistics review (Rizzo & Szekely, 2016): explains energy statistics that can perform a variety of tasks in high-dimensional data
  4. Random Forest (Breiman, 2001): introduced random forests, both practice and theory; still the best “black box” machine learning algorithm.
  5. GMRA (Allard, Chen, Maggioni; 2012): introduces and explains relationships between dictionary learning and machine learning, with theory and implementation to boot
  6. MLE for Misspecified Models (White, 1982): shows that under misspecification, the MLE converges to the distribution in the model class that minimizes KL divergence from the truth
  7. manifold learning is “just” kernel pca (Ham et al. 2004): links several different important manifold learning techniques
  8. knn: proves that k-nearest neighbor regression is universally consistent
  9. grazing goat starves in high dimensions: shows that our intuition in high dimensions is way off (see the small numerical illustration after this list)
  10. approximate nearest neighbor review: well-written discussion on the original LSH paper and subsequent work, demonstrating that randomization is a very useful approximation
  11. MSE doesn’t work in >2 dimensions (Stein, 1960): the geometric reason that sparsity and other forms of regularization can help in finite samples
  12. lasso doesn’t work, even under the true model: shows that the lasso path includes lots of false positives, even when there are no correlations and the signal is sparse, and therefore shouldn’t be trusted
  13. generalized linear models: showed that one can estimate many reasonable nonlinear regression functions with a sum of very simple nonlinear functions; though not used so much in practice these days, this is still a very important concept
  14. statistical pattern recognition (Jain, 2000): a wonderful review; shows, via Trunk’s example, that the optimal parameter/model with finite data is not necessarily the truth.
  15. imagenet classification via deep learning: shows that with lots of training data and FLOPs, deep learning dramatically outperformed previous methods on images
  16. Statistical modeling: The two cultures (with comments and a rejoinder by the author): explains the difference between machine learning and statistical modeling, written before the term “machine learning” was cool.
  17. Classifier Technology and the Illusion of Progress: in particular, it cites Hoadley, who, in the same discussion, “coined a phrase called the ‘ping-pong theorem.’ This theorem says that if we revealed to Professor Breiman the performance of our best model and gave him our data, then he could develop an algorithmic model using random forests, which would outperform our model. But if he revealed to us the performance of his model, then we could develop a segmented scorecard, which would outperform his model.”
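
as a concrete illustration of the density estimation view of clustering (item 1), here is a minimal sketch using scikit-learn's GaussianMixture; the data and parameters are made up purely for illustration:

```python
# minimal sketch: clustering as gaussian mixture density estimation.
# assumes scikit-learn is installed; the data are synthetic and illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two synthetic gaussian blobs in 5 dimensions ("medium-dimensional" data)
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(200, 5)),
    rng.normal(loc=3.0, scale=1.0, size=(200, 5)),
])

# fit mixtures with different numbers of components and pick one by BIC,
# i.e., choose a density estimate first, then read cluster labels off of it
models = [GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(X))
labels = best.predict(X)             # hard cluster assignments
posteriors = best.predict_proba(X)   # soft assignments from the fitted density
print(best.n_components, labels[:10])
```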
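
and as a tiny numerical companion to the “grazing goat” item (item 9), a back-of-the-envelope calculation showing how much of a ball’s volume hides in a thin outer shell as the dimension grows (the dimensions below are chosen arbitrarily):

```python
# minimal sketch: most of a high-dimensional ball's volume sits near its surface.
# the fraction of a unit ball's volume within distance eps of the surface is
# 1 - (1 - eps)^d, which approaches 1 as the dimension d grows.
eps = 0.01
for d in (2, 10, 100, 1000):
    shell_fraction = 1 - (1 - eps) ** d
    print(f"d={d:5d}  fraction of volume within {eps:.0%} of the surface: {shell_fraction:.4f}")
```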

other people’s neurodata collection:

  1. array tomography for collecting multispectral 3D gene expression maps; see Knowing a synapse when you see one for more discussion.
  2. CLARITY and iDISCO: for seeing whole brains with fluorescence without physically sectioning
  3. serial EM for seeing large volumes of nanoscale neuroanatomy
  4. multimodal MRI for seeing in vivo non-invasive brain structure and function at millimeter scale
  5. calcium imaging for seeing whole-brain activity in zebrafish, or lots of activity in other organisms
  6. behavior for characterizing “natural” behaviors and linking them to neurons.
  7. biomarkers in psychiatry, stating that clinical psychiatrists do not yet utilize any brain-imaging-based biomarkers for any clinical diagnosis (as of july 2012).

our work:

  1. incommensurability phenomenon: shows that if you run PCA twice on two different samples of noisy data, you can get arbitrarily different results
  2. You say, I say: shows that graph invariants are test statistics, and therefore, we can determine which invariants are optimal for any test by thinking about them in these terms
  3. Graph Matching: Relax at your own risk: shows that we can use a convex analytic approximation to initialize a non-convex numerical approximation, and get good results on an NP-hard problem, even when the convex approximation is provably bad
  4. Consistency of ASE: shows that spectral embedding yields consistent estimators of latent positions for random graph models, so then we can use them in typical machine learning algorithms for subsequent inference (see the sketch after this list)
  5. HSBM: one of the best statistical models of large graphs we know
  6. MGC: shows that we can use and estimate locality for certain exploitation tasks, improving upon previous dependence tests both theoretically and empirically
  7. FlashGraph: demonstrates the power of semi-external memory modeling to build algorithms on single-node multicore machines that outperform cluster implementations with orders of magnitude more resources
  8. open connectome paper: explains the basis of our spatial database
  9. mi-lddmm: the current best approach for nonlinear registration, imho
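
for readers curious what adjacency spectral embedding (item 4) actually computes, here is a rough numpy sketch of the idea (not the implementations we actually use); the graph size, block probabilities, and embedding dimension are made up for illustration:

```python
# rough sketch of adjacency spectral embedding (ASE): embed each node using the
# top-d eigenvectors of the adjacency matrix, scaled by sqrt(|eigenvalue|).
import numpy as np

rng = np.random.default_rng(0)

# sample a small 2-block stochastic block model adjacency matrix (illustrative)
n, d = 100, 2
z = rng.integers(0, 2, size=n)              # block labels
B = np.array([[0.5, 0.1],
              [0.1, 0.4]])                  # block probability matrix
P = B[z][:, z]                              # edge probabilities
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric, hollow adjacency

# ASE: keep the d eigenpairs largest in magnitude, scale the eigenvectors
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[::-1][:d]
Xhat = vecs[:, top] * np.sqrt(np.abs(vals[top]))   # estimated latent positions

# rows of Xhat estimate the latent positions (up to rotation), so standard
# clustering/classification on them can be used for subsequent inference
print(Xhat.shape)   # (100, 2)
```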
