I was delighted to discover the above video about a tiny fraction of the brain of a mouse, thanks to P.Z. Myers. He discusses the paper Saturated Reconstruction of a Volume of Neocortex, Cell 162(3):648-61, doi: 10.1016/j.cell.2015.06.054, and points out the futility of examining the brain in ultraminute detail in the hope of understanding the basic principles by which it works.
Interestingly, the authors of the paper themselves have doubts about the approach and write:
Finally, given the many challenges we encountered and those that remain in doing saturated connectomics, we think it is fair to question whether the results justify the effort expended. What after all have we gained from all this high density reconstruction of such a small volume? In our view, aside from the realization that connectivity is not going to be easy to explain by looking at overlap of axons and dendrites (a central premise of the Human Brain Project), we think that this ‘‘omics’’ effort lays bare the magnitude of the problem confronting neuroscientists who seek to understand the brain. Although technologies, such as the ones described in this paper, seek to provide a more complete description of the complexity of a system, they do not necessarily make understanding the system any easier. Rather, this work challenges the notion that the only thing that stands in the way of fundamental mechanistic insights is lack of data. The numbers of different neurons interacting within each miniscule portion of the cortex is greater than the total number of different neurons in many behaving animals. Some may therefore read this work as a cautionary tale that the task is impossible. Our view is more sanguine; in the nascent field of connectomics there is no reason to stop doing it until the results are boring.
My own approach, which I am developing on this blog, is to start from the idea that the problem is so complex that it is best to assume it is infinitely complex, so that any attempt to discover all the possibilities is theoretically impossible. I follow the approach used by physicists, who work with an "ideal gas" model because there are far too many molecules to consider individually. Instead of an infinite number of identical gas molecules with a range of kinetic energies, I consider an infinite number of identical neurons. Every neuron has the potential to link with every other neuron (just as any pair of molecules can collide in the ideal gas model), and these links vary in strength. The links act as a store for the patterns of information held in the brain, and brain activity involves the passing of electrical activity between neurons - activity which may alter the strength of the links involved. Because the model works in an infinite framework there is no limit to the complexity of the memories that can be stored, and because the strengths of the links change with use no two brains can ever be expected to be identical - and each brain will change dynamically with time.
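The core of the model above - interchangeable neurons, pairwise links of varying strength, and activity that strengthens the links it uses - can be sketched in a few lines of code. This is only an illustrative toy, not the model itself: the class name, the learning rate, and the additive "strengthen with use" rule are my own assumptions, since the post does not specify how link strengths change.

```python
class IdealBrain:
    """Toy sketch of the 'ideal brain' model: identical neurons,
    any pair of which may be linked, with link strengths that
    change with use (assumed additive update - my assumption)."""

    def __init__(self, learning_rate=0.1):
        # (unordered pair of neurons) -> link strength; absent means 0
        self.links = {}
        self.learning_rate = learning_rate

    def strength(self, a, b):
        """Current strength of the link between neurons a and b."""
        return self.links.get(frozenset((a, b)), 0.0)

    def activate(self, path):
        """Pass activity along a chain of neurons, strengthening
        every link the activity traverses."""
        for a, b in zip(path, path[1:]):
            key = frozenset((a, b))
            self.links[key] = self.links.get(key, 0.0) + self.learning_rate


brain = IdealBrain()
brain.activate(["n1", "n2", "n3"])
brain.activate(["n1", "n2"])  # repeated use strengthens this link further
print(brain.strength("n1", "n2"))  # stronger than the once-used n2-n3 link
```

Because the link store is a dictionary that grows only as links are used, nothing in the sketch presumes a fixed number of neurons - mirroring the "infinite framework" assumption - and two instances given different activity histories end up with different link strengths, just as no two brains are expected to be identical.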
The strength of the "ideal gas" model is that, while it is not perfect, it provides a predictive framework against which the behaviour of real gases can be judged. I would be the first to admit that there are limitations to my "ideal brain" model, but I believe its predictions about how brains might work, and how human brains evolved, could provide a framework for understanding how real brains actually behave. This would seem a far more effective approach than some of the very expensive research projects currently underway.