
The Brain – An Orchestra With A Conductor

6 February, 2006
orchestras have conductors, for a reason

Does the human brain have a "central" point of control?

Is the brain like an orchestra, with a conductor, or is it not like an orchestra?

Why do orchestras have conductors? Don't the musicians already know how to play the music?


In a "VIEWpoint" article "The Brain – An Orchestra Without A Conductor" in issue 2003/5 of the journal MaxPlanckResearch, Wolf Singer argues what might be called the anti-centralist position, that there is no physically central point of control in the human brain.

The article argues that our intuitions about our own brains are completely in error. However, the evidence that Singer gives in support of this claim is essentially negative: a central controller of the brain has not been found, therefore it does not exist; and because the representation of information in some parts of the brain is concluded (for certain reasons) to be unimaginably complex and incomprehensible, all naive intuitions we have about what happens inside our brains (including the intuition that there is a central controller) must be completely wrong.


Although the word "Orchestra" appears in the title of the article, there is no mention of orchestras in the article itself. But the thing about orchestras is that they do have conductors, and there is a reason for this: an orchestra has a top-level goal, which is to play pieces of music as well as possible, and the optimal achievement of this goal is the responsibility of the conductor. The orchestra contains many components, i.e. the players and their instruments, and each of these components behaves in a goal-oriented fashion, but only the conductor is in a position to oversee the achievement of the top-level goal.

If orchestras could function without conductors then there would be orchestras that didn't have conductors, but actually there aren't any, unless you consider very small "orchestras", such as chamber music ensembles, which only have, say, 4 players.

And as it happens, the human brain also has a top-level goal, which consists of the biological goal of optimising the long-term reproductive success of the brain's owner. So it would be surprising if there wasn't some specific part of the brain responsible for overseeing the achievement of this top-level goal.

In later parts of his article, Singer alludes to the existence of complex systems which he assumes have dynamics of similar complexity to that which he alleges the human brain to have. These systems include "social and political systems" and "economic systems and biotopes". But a critical feature of these systems is that they do not have top-level goals. Rather they consist of interacting components where each component has its own goals, and the dynamics of the systems reflect the interactions between the goal-oriented behaviour of each of these components.

My conclusion at this point is the following: among the systems whose operation we understand, all those that have top-level goals have points of central control, and those which don't have points of central control do not have top-level goals.

There are also systems that have mechanisms of central control, even though they lack a clearly defined top-level goal. A classic example is that of a political democracy. The lives of citizens in a democracy are governed by a government which is itself chosen by the same citizens. Most people would regard this type of democracy as having some type of centrality, even though there is no exact point of central control. A typical Western liberal democracy has a number of components which are essential to its continued operation, including:

Each of these components occupies a distinct physical location, so there is no single location of central control. And there is no single point through which all information relevant to the control and operation of a democracy flows. The most critical point of information processing in a democracy is the counting and totalling of votes on election day. But by themselves, these totals don't tell much about what is happening in the democracy – they only have meaning within the context of the election, the choice of candidates and the information that the public has about what the candidates intend to do if elected to office and what the expected consequences of those intentions would be given the prevailing circumstances in that society.

Location, Location

Which brings me to the point that Singer makes in the first section of his article: that no single central point of control has been found through which all information about the operation of the brain flows.

To refute this suggestion, I need to introduce a plausible hypothesis about the form of a hypothetical centralised system of control for the brain, so that I can identify the various components of this hypothetical system with processing components known to exist in the brain.

My own current theory of consciousness as a centralised control system can be described in terms of the information processing steps that it performs:

  1. Identify a situation requiring conscious decision making.
  2. Propose possible response.
  3. Apply top-level strategies to evaluate desirability of possible response.
  4. If overall evaluation is positive, activate possible response.
  5. Observe consequence of the response.
  6. According to success of consequence, adjust weights of top-level strategies that voted for or against the possible response.
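As a concreteness check, the loop above can be sketched in code. Everything below (the class name, the voting scheme, the 0.1 learning rate) is my own illustration of the scheme, not a claim about how neurons implement it; step 1, identifying the situation, is left to the caller.

```python
class CentralController:
    """Toy sketch of the six-step decision loop. All names and numbers
    here are illustrative assumptions, not a model of real neurons."""

    def __init__(self, strategies):
        # strategies: name -> function scoring a proposed response in [-1, 1]
        self.strategies = strategies
        self.weights = {name: 1.0 for name in strategies}  # adjusted in step 6

    def decide(self, situation, propose, act, observe):
        response = propose(situation)                       # step 2: propose
        votes = {name: vote(situation, response)            # step 3: evaluate
                 for name, vote in self.strategies.items()}
        total = sum(self.weights[n] * v for n, v in votes.items())
        if total > 0:                                       # step 4: activate
            act(response)
            success = observe()                             # step 5: observe
            for name, v in votes.items():                   # step 6: reinforce
                # strengthen strategies whose vote matched the outcome
                self.weights[name] += 0.1 * v * (1 if success else -1)
        return response, total

# Hypothetical usage: two fixed strategies voting on a proposed response.
ctrl = CentralController({
    "approach": lambda situation, response: 1.0,
    "avoid":    lambda situation, response: -0.5,
})
response, total = ctrl.decide("novel situation",
                              propose=lambda s: "candidate action",
                              act=lambda r: None,
                              observe=lambda: True)
# After one positive outcome, "approach" (which voted for the action)
# has been strengthened and "avoid" weakened.
```

The point of the sketch is only that the six steps compose into a single, perfectly comprehensible control loop; nothing about the scheme requires mysterious dynamics.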

Each of these steps must take place in a different part of the brain, because that is how the human brain is observed to work – different areas process and represent different kinds of information. Some putative identifications can be made as follows:

These identifications are all tentative and incomplete, but I think they are enough to disprove claims that we know for sure that there is no identifiable mechanism of central control.


After discussing the lack of a central control mechanism, Singer moves on to discuss binding mechanisms. A fundamental fact about the brain is that different regions specialise in processing different types of information, and this cuts across our subjective experience of perceiving objects, where each object is described by information about its different attributes, such as colour, size, shape and motion, as well as more goal-oriented attributes, such as "ugliness", "scariness" etc.

The problem of object perception can be defined as a binding problem, i.e. how information about different attributes of an object is "bound" together into the perception of that object. But the problem can also be described as a separation problem, in the sense that the real issue is that of keeping values of information about the attributes of different objects separated from each other.

Whichever way you view the problem, I have no dispute with Singer's proposed solution (as supported by the evidence he quotes), that binding occurs by means of synchronisation of the oscillatory firing of different neurons. (Stated in terms of separation, we can say that neurons are separated if their oscillatory firing is not synchronised.) What I dispute is whether binding by synchronisation implies the incomprehensibility of how information is represented in the human brain.
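Binding by synchronisation is itself easy to illustrate with a crude coincidence measure over spike times; the window size and the measure below are my own simplifications, chosen only to show the idea:

```python
def synchrony(spikes_a, spikes_b, window=0.005):
    """Fraction of spikes in train A with a coincident spike in train B
    within `window` seconds. A crude synchrony measure, for illustration:
    two neuron groups count as "bound" when this is high."""
    if not spikes_a:
        return 0.0
    hits = sum(any(abs(a - b) <= window for b in spikes_b) for a in spikes_a)
    return hits / len(spikes_a)

# Two trains firing in lockstep are bound; a phase-shifted train is separated.
locked  = [0.000, 0.100, 0.200, 0.300]   # spike times in seconds
same    = [0.001, 0.101, 0.199, 0.301]   # nearly coincident with `locked`
shifted = [0.050, 0.150, 0.250, 0.350]   # half a cycle out of phase

synchrony(locked, same)     # 1.0 -> bound to the same object
synchrony(locked, shifted)  # 0.0 -> separated, a different object
```

Nothing here is incomprehensible: synchrony is just a readable relation between firing times.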


After discussing the very interesting possibility that schizophrenia may be caused by failures in these synchronisation mechanisms, Singer talks about "highly abstract, spatially and temporally structured excitation pattern[s]", and movement through an "inconceivably high-dimension space".

He comes to the conclusion that he has proved that the representation of information in the human brain, where subject to this binding mechanism, must be of a form which is intrinsically incomprehensible to the very human who is trying to understand his or her own brain.

He then talks about "nonlinear" and "high-dimensional" processes, and claims that the human brain can't understand anything nonlinear.

Unfortunately I am not sure that Singer even properly understands what the term "nonlinear" means. My own understanding is that a function f is linear if f(x+y) = f(x) + f(y) and f(ax) = af(x) where a is a scalar, and it's non-linear if it is not linear.
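This definition can even be spot-checked numerically. The helper below tests both conditions on sample points; passing does not prove linearity, but any failure disproves it:

```python
def check_linearity(f, samples, scalars, tol=1e-9):
    """Numerically test f(x + y) == f(x) + f(y) and f(a*x) == a*f(x)
    on the given sample points. A spot check, not a proof."""
    additive = all(abs(f(x + y) - (f(x) + f(y))) < tol
                   for x in samples for y in samples)
    homogeneous = all(abs(f(a * x) - a * f(x)) < tol
                      for a in scalars for x in samples)
    return additive and homogeneous

check_linearity(lambda x: 3 * x, [1.0, 2.5, -4.0], [0.5, 2.0])  # True
check_linearity(lambda x: x * x, [1.0, 2.5, -4.0], [0.5, 2.0])  # False
```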

To give a simple example of a "non-linear" system that most of us can understand, consider a system consisting of a house with a certain number of people in it. Let x equal 10 young men, let y equal 10 young women, and let f represent the passage of one month.

The "nonlinearity" of this situation, that f(x) + f(y) is not equal to f(x+y), arises from the expected interactions between the men and women. In other words, there won't be any pregnancies in f(x) or f(y), but there could be some in f(x+y).

Another example of non-linearity which is not too hard to understand is the operation of a neuron. A neuron is non-linear because its output is a function of whether the summed inputs exceed a threshold (this description is admittedly a simplification, but a more accurate description would be even more non-linear). The neuron is also "high-dimensional" with respect to its inputs, which typically number in the thousands, but not so high-dimensional with respect to the output, which consists of a single signal.
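The failure of additivity in such a unit is easy to demonstrate. The code below is the simplified threshold neuron just described, not a biophysical model:

```python
def neuron(inputs, threshold=1.0):
    """Simplified neuron: fires (1) iff the summed inputs exceed a threshold."""
    return 1 if sum(inputs) > threshold else 0

x = [0.6]  # one sub-threshold input
y = [0.6]  # another sub-threshold input

# Linearity would require f(x + y) == f(x) + f(y), but:
neuron(x) + neuron(y)  # 0 + 0 = 0 (neither input alone causes firing)
neuron(x + y)          # 1 (together they sum to 1.2 and cross the threshold)
```

So the neuron is non-linear in exactly the textbook sense, yet perfectly comprehensible.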

The neuron's output can be considered high-dimensional if we consider the possible train of output signals over a period of time. However, this dimensionality is reduced somewhat if the neuron is oscillating rhythmically, since its behaviour is then described by just two dimensions: phase and frequency.


The dimensionality of a neuron's outputs and inputs highlights a general fact about information processing in the brain (and in lots of other information processing systems), which is that information processing very often consists of extracting information represented in a very high-dimensional space of values, and reducing it to a representation in a very low-dimensional space of values.

To take a simple example, consider the famous "Halle Berry" neuron. When you look at a picture of a person, information about whether or not that person is Halle Berry is implicitly contained in the activation of neurons in the retina, since some patterns of activity correspond to the presence of Halle Berry, and other patterns don't. But this representation is not very useful, because the direct description of the sets of Halle Berry activity patterns and the sets of non-Halle Berry activity patterns is too complicated. The whole point of all the neural machinery that lies between the retinal neurons and the "Halle Berry" neuron is to reduce the "yes it is Halle"/"no it isn't Halle" distinction to the possible firing states of that one "Halle Berry" neuron.

This suggests a general principle of information processing in the brain, which is that the brain is always trying to convert information represented in a high-dimensional form into a lower-dimensional form. Features like population encoding mean that certain types of information are never completely reduced to representation in just one neuron (which would be fragile anyway), but in general we can assume that lower-dimensional representations are always more useful than higher-dimensional representations.
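The Halle Berry reduction can be caricatured as a single detector that collapses a high-dimensional activation pattern to one bit. The "preferred pattern", the cosine-similarity measure and the threshold below are all my own stand-ins for whatever the intervening neural machinery actually computes:

```python
import random

random.seed(0)
DIM = 1000  # stand-in for a high-dimensional retinal activation pattern

# A hypothetical preferred pattern the detector is tuned to (pure illustration),
# and an unrelated random pattern to compare against.
target = [random.gauss(0, 1) for _ in range(DIM)]
noise  = [random.gauss(0, 1) for _ in range(DIM)]

def detector(activation, threshold=0.8):
    """Collapse a DIM-dimensional pattern into one yes/no signal:
    fire iff the pattern correlates strongly with the preferred pattern."""
    dot = sum(a * t for a, t in zip(activation, target))
    na = sum(a * a for a in activation) ** 0.5
    nt = sum(t * t for t in target) ** 0.5
    return dot / (na * nt) > threshold

detector(target)  # fires: the pattern matches
detector(noise)   # silent: an unrelated high-dimensional pattern
```

A thousand dimensions in, one dimension out: the high-dimensional representation is the raw material, and the low-dimensional representation is the useful product.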

There are known examples of information processing systems that do involve the processing of highly multi-dimensional states into other highly multi-dimensional states, but ironically, given Singer's emphasis on non-linearity, these systems are all very linear.

Examples that come to mind are holography and error-correcting codes (which are linear mod 2).
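The mod-2 linearity of such codes is easy to verify. The toy parity-check code below uses illustrative parity equations in the style of a Hamming code (the particular choices are mine); encoding the XOR of two messages gives the XOR of their encodings:

```python
def encode(bits):
    """Encode 4 data bits with 3 parity bits. Each output bit is a mod-2
    sum of input bits, which is what makes the code linear mod 2."""
    d1, d2, d3, d4 = bits
    p1 = (d1 + d2 + d4) % 2
    p2 = (d1 + d3 + d4) % 2
    p3 = (d2 + d3 + d4) % 2
    return [d1, d2, d3, d4, p1, p2, p3]

def xor(u, v):
    """Bitwise addition mod 2."""
    return [(a + b) % 2 for a, b in zip(u, v)]

u, v = [1, 0, 1, 1], [0, 1, 1, 0]
# Linearity mod 2: encode(u XOR v) == encode(u) XOR encode(v).
encode(xor(u, v)) == xor(encode(u), encode(v))  # True
```

So a system can be genuinely high-dimensional and still be transparently analysable, precisely because it is linear.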


Singer posits multi-dimensional non-linear processing without giving any specific details, and his "proof" is essentially negative, i.e. the processing isn't linear, and it isn't low-dimensional.

Seen this way, Singer's theory is really a mysterian anti-theory, and is similar to other mysterian anti-theories of the human mind, including the classic "soul" theory of the mind, and the more recent example of Roger Penrose's quantum non-computable computation theory. Where the classic theory supposes "non-material", and Penrose supposes "non-Turing-computable", Singer supposes "not low-dimensional" and "non-linear".

What makes a theory an anti-theory is its dependence on proofs of inexplicability or incomprehensibility, in contrast to normal scientific theories which are proved true (or at least possibly true) because they explain some phenomenon and allow it to be comprehended. An anti-theory is "mysterian", because the mystery is an essential part of the proof of the anti-theory's correctness.

Reductionism All The Way

The major alternative to these various mysterian theories is classic reductionism: explanations of how the brain works are to be reduced to explanations of how individual neurons work and how they are connected. Binding, which may involve synchronised oscillations, is not an entry into a mysterious high-dimensional computation system. Rather, it is an evolutionary kludge to deal with the fact that each attribute is processed by the brain in one location, and sometimes the same attribute needs to be processed for different objects. So there needs to be some way to have separate groups of neurons in each location responding to different objects, while maintaining connectivity between groups of neurons in different areas responding to the same object.

The success of neural reductionism in explaining the operation of many brain areas suggests that all neural operations may be explained this way, with the implication that all neurons can be understood as "feature detectors", and conversely, that all "feature detection" is implemented by specific and not-too-large groups of neurons. If there is a central control system which decides what to do next and how to balance different priorities, then there must exist corresponding sets of neurons tasked with answering specific questions like "what should I do next?", and "is this suggestion of what to do next a good idea?" and "given the conflict between two priorities, which one is most important?", and so on.

Just because we don't yet know the meaning of every neuron in the human brain, it doesn't follow that there have to exist neurons whose meanings are incomprehensible, nor does it mean that there are meanings that are not represented in a comprehensible manner by a minimal set of neurons. Much progress in neuroscience consists of more and more detailed understanding of how information is represented in particular neurons and in particular groups of neurons, and there is no reason to suppose that this progress will not continue, up to the point where we can account for the representation of those meanings which correspond to our "folk" intuitions about the central control mechanisms of the human mind.
