I was delighted to discover the above video about a tiny fraction of the brain of a mouse thanks to P.Z. Myers. He discusses the paper Saturated Reconstruction of a Volume of Neocortex (Cell 162(3):648-61, doi: 10.1016/j.cell.2015.06.054) and points out the futility of examining the brain in ultra-minute detail in the hope of understanding the basic principles by which it works.
Interestingly, the authors of the paper themselves have doubts about the approach and write:
Finally, given the many challenges we encountered and those that remain in doing saturated connectomics, we think it is fair to question whether the results justify the effort expended. What after all have we gained from all this high density reconstruction of such a small volume? In our view, aside from the realization that connectivity is not going to be easy to explain by looking at overlap of axons and dendrites (a central premise of the Human Brain Project), we think that this "omics" effort lays bare the magnitude of the problem confronting neuroscientists who seek to understand the brain. Although technologies, such as the ones described in this paper, seek to provide a more complete description of the complexity of a system, they do not necessarily make understanding the system any easier. Rather, this work challenges the notion that the only thing that stands in the way of fundamental mechanistic insights is lack of data. The numbers of different neurons interacting within each miniscule portion of the cortex is greater than the total number of different neurons in many behaving animals. Some may therefore read this work as a cautionary tale that the task is impossible. Our view is more sanguine; in the nascent field of connectomics there is no reason to stop doing it until the results are boring.
My own approach, which I am developing on this blog, is to start with the idea that the problem is so complex that it is best to assume it is infinitely complex, so that any attempt to discover all the possibilities is theoretically impossible. I follow the approach used by physicists, who work with an "ideal gas" model because there are far too many molecules to consider individually. Instead of an infinite number of identical gas molecules with a range of kinetic energies, I consider an infinite number of identical neurons. Every neuron has the potential to link with every other neuron (just as any pair of molecules can collide in the ideal gas model) and these links vary in strength. The links act as a store for the patterns of information held in the brain, and brain activity involves the passing of electrical activity between neurons - and this activity may alter the strength of the links involved. Because the model works in an infinite framework there is no limit to the maximum complexity of the memories which can be stored, and because the strength of the links changes with use no two brains can ever be expected to be identical - and each brain will dynamically change with time.
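To make the analogy concrete, here is a minimal Python sketch of the kind of network I have in mind. The class name, the Hebbian-style reinforcement rule, and all the parameter values are illustrative assumptions of my own, not a specification of how real neurons behave.

```python
from collections import defaultdict

class IdealNeuron:
    """One neuron in the 'ideal brain' model: every neuron follows the
    same protocol, and all stored information lives in its weighted links."""

    def __init__(self, name):
        self.name = name
        self.links = defaultdict(float)   # linked neuron -> link strength

    def fire(self, strength=1.0, learning_rate=0.1, threshold=0.5):
        """Pass activity to linked neurons; any link that carries activity
        is strengthened, so the stored patterns change with use."""
        activated = []
        for neighbour, weight in list(self.links.items()):
            signal = strength * weight
            if signal > threshold:
                self.links[neighbour] += learning_rate * signal
                activated.append(neighbour)
        return activated

# Two identical neurons whose only individuality is their links:
a, b = IdealNeuron("a"), IdealNeuron("b")
a.links[b] = 0.6
for _ in range(5):
    a.fire()
print(round(a.links[b], 3))   # the used link has grown stronger
```

Nothing here depends on what a neuron looks like; only the shared protocol and the ever-changing pattern of links matter, which is the whole point of the idealisation.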
The strength of the "ideal gas" model is that, while it is not perfect, it provides a predictive framework against which the behaviour of real gases can be judged. I would be the first to admit there are limitations to my "ideal brain" model, but I believe its predictions about how brains might work, and how human brains evolved, could provide a framework for understanding how real brains actually behave. This would seem a far more effective approach than some of the very expensive research projects currently underway.
I would say the biggest problem with an "ideal brain" model is that no two neurons are the same. Broad assumptions about ideal gases can apply to a wide range of systems, such as entire stars, but "complex" systems such as weather, biology, and (as a subset) the workings of the brain mean that mathematical reduction will, in my opinion, never bear fruit, even conceptually. With a star, the behavior of individual atoms doesn't matter. With weather, well, there's a reason it's called a "chaotic" system, and simulations are only as accurate as their fidelity. I suspect that a brain simulation could never work based on such a model and that, ultimately, we really will need to know every single possible interaction of every last aspect right down to the molecular level (mercifully, no need for the quantum) to actually get anything even close to accurate.
PZ himself has said the same every time someone tries to come up with general mathematical "laws" that would "simplify" biology (the joke "assume a spherical cow" comes into play here). I won't say the problem is intractable, but I'll say that we'll all be dead long before it can be solved.
In the model I propose all neurons are identical in that they use the same protocol to interact with other neurons (whether they look the same under a microscope is irrelevant to the model), but at any instant in time each is uniquely linked to other neurons, and of course the nature of these links can dynamically change with time. This means that while every neuron is "the same", every neuron considered together with its connections is dynamically unique, and your objection falls away: the model allows for an infinite number of unique "neurons and network connections", which is surely complex enough to model a human brain.
The power of the model is that it allows recursion, which means that it can handle patterns of (in theory) infinite complexity. In addition the model can morph, so at the simplest level it is just a network of neurons recognising patterns in inputs. However it can also be viewed as a simple set processor in which groups of neurons correspond to real-world concepts. A further morph and the model will support the kind of rule-based system the human brain needs to support sophisticated language, toolmaking, etc.
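A toy sketch may help show what I mean by recursion here. In this fragment a "concept" is simply a named set of sub-patterns, and recognition recurses until it bottoms out in raw inputs; the example memory and the all-sub-patterns-must-match rule are illustrative assumptions only, not part of the model's definition.

```python
def matches(pattern, observed, memory):
    """True if everything the pattern requires is present in `observed`.

    `memory` maps a concept name to the set of sub-patterns that
    define it, so concepts can nest to any depth (recursion).
    """
    if pattern not in memory:            # base case: a raw input token
        return pattern in observed
    return all(matches(sub, observed, memory)
               for sub in memory[pattern])

# A toy memory: concepts defined as sets of simpler concepts/tokens.
memory = {
    "face":  {"eyes", "mouth"},
    "eyes":  {"left-eye", "right-eye"},
    "mouth": {"lips"},
}

print(matches("face", {"left-eye", "right-eye", "lips"}, memory))  # True
print(matches("face", {"left-eye", "lips"}, memory))               # False
```

Read one way this is a network recognising patterns in inputs; read another way it is a set processor over concepts; layering rules on top of it is a further morph of the same machinery.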
The model also predicts that there is a limit to the intelligence of any animal, because no animal lives long enough to learn to develop the necessary links - which is why animals are not good at toolmaking. However, if language develops to a point where significant cultural information can be quickly passed between generations, there are two or three major tipping points which mean a very rapid increase in capability with no need for any increase in brain capacity or major genetic change in the protocol. Human culture appears to have reached this tipping point about 150,000 years ago. It should be noted that the model also predicts that humans should have unreliable long-term memories and suffer from confirmation bias. To get the maximum effect the human brain has developed in such a way as to accept culture from charismatic leaders (to maximise the speed of learning culture), which provides an opening for charismatic gods who claim to be all-powerful and promise eternal life.
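The tipping point can be illustrated with a deliberately crude recurrence (my own toy numbers, not data): if one brain can only learn a fixed amount in a lifetime, capability is capped, but once a fraction of accumulated knowledge is passed between generations the cap rises sharply as transmission becomes efficient, with no change in brain capacity at all.

```python
# Toy model: each generation inherits a fraction f of accumulated
# knowledge and adds the fixed amount L one brain can learn in a
# lifetime, so K_next = f*K + L, which settles at L / (1 - f).
# As f approaches 1 the ceiling grows without limit even though the
# individual learning capacity L never changes.

L = 1.0  # knowledge one brain can acquire in a lifetime (arbitrary units)

for f in (0.0, 0.5, 0.9, 0.99):  # fraction transmitted between generations
    K = 0.0
    for _ in range(1000):        # run many generations to convergence
        K = f * K + L
    print(f"f={f:.2f}: accumulated knowledge ~ {K:.1f}")
# f=0.00 -> 1.0, f=0.50 -> 2.0, f=0.90 -> 10.0, f=0.99 -> 100.0
```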
Your comment that simple models don't work can be refuted by anyone who understands evolution. Survival of the fittest is a simple model, and the genetic code is also very simple, but both are excellent starting points for understanding the vast complexity of the natural world. In the same way my model provides a starting point for trying to explain the information processing capabilities of both animal and human brains.
Of course if everyone believes there are no simple solutions, and the brain evolved to follow the most charismatic experts, no one will ever look for a simple solution. In fact a key part of my research was carried out in the 1970s and 80s but was axed because it could not get funding: it conflicted with the view of the leading AI experts of the time that simple systems would never work - see my earlier post "Algorithms aren't everything."