#2 Emotions are mental causes, while Perceptions are mental effects 

The problem with Behaviourism is that it isn't a science. Science concerns itself primarily with finding the correct causes for the observed effects. The Behaviourists saw that behaviour was easy to measure, and said 'aha, we have an effect'. They broke behaviours down into building blocks called reflexes. A reflex is a learned association, or pairing, between a sensory stimulus (this must be the cause) and a motor response (this must be the effect). Complex animal and human behaviour is (obviously!) composed of (endless?) chains of reflexes. Easy as pie.

There is a glaring problem, which behaviourist theory successfully managed to brush under the rug for almost half a century. That problem is agency. In other words, how is the chain of reflexes started in the first place? By introspection, we know that our own behaviour is initiated by our conscious mind planning a behaviour that satisfies our most pressing need, eg hunger, pain, an overdue bill, an unhappy member of the family. For the most part, we reduce those needs by locating and retrieving the resources that satisfy them. Even though every one of these thoughts, emotions and plans is part of our normal everyday reality, they weren't directly (ie physically) measurable, so they simply didn't exist in the Behaviourist scheme of things. Adult academics with tenure taught this stuff and post-doctoral students actually believed it [1].

In GOLEM theory, the set of ideas that drives this discussion forward, the central axiom or hypothesis is this-
Emotions are mental causes and perceptions are mental effects. When in doubt, remember that any given cause happens before its effect/s. This framework is implied by PCT, in which the guidance vector must be generated before the motor commands can be sent to the 'end effector' [5], eg adjusting the angle of attack of a missile's tail fins. Figure 1 in the previous section contains a diagram of a missile guidance circuit, in which a red vector plays the role of the machine's emotional state (its motivation), while the blue field plays the role of the machine's perceptual state, namely its consciousness. This perceptual state has two factors- its attentional focus (a point), foregrounded against its locus of awareness (an area), which forms its background state. Libet's paradoxical set of experiments demonstrated the trouble that arises with experiments which do not have a clearly developed teleology (ie guiding principles and goal-directed purpose). Benjamin Libet [2] mistakenly assumed that the subject's consciousness was the causal agent, and was shocked to find that the true cause (which always precedes its effects) was actually an unconscious process in the subject's brain, of which the subject was quite unaware.
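
To make the cause-before-effect ordering concrete, here is a minimal sketch (in Python) of a PCT-style control step. The function names, the two-dimensional positions and the gain value are illustrative assumptions rather than part of PCT or GOLEM; the only point being made is that the guidance vector (the 'red vector', the machine's motivation) must exist before any motor command can be derived from it.

```python
# A minimal sketch (not the author's model) of the PCT ordering described above:
# the guidance vector, the 'emotional' cause, is computed first, and only then is
# a motor command sent to the end effector. All names and the gain value are
# illustrative assumptions.

def guidance_vector(target_pos, perceived_pos):
    """The error between where we want to be and where we perceive we are (the 'red vector')."""
    return [t - p for t, p in zip(target_pos, perceived_pos)]

def motor_command(vector, gain=0.5):
    """Translate the guidance vector into an actuator command, eg a fin-angle adjustment."""
    return [gain * v for v in vector]

def control_step(target_pos, perceived_pos):
    v = guidance_vector(target_pos, perceived_pos)   # cause: the vector exists first...
    return motor_command(v)                          # ...effect: then the command follows

print(control_step(target_pos=[10.0, 5.0], perceived_pos=[8.0, 6.0]))   # -> [1.0, -0.5]
```

Running the step with a perceived position that lags the target yields a corrective command; the 'emotional' error quite literally causes the 'behavioural' output.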

Although we won't dwell unnecessarily upon these institutional failures, we do need to analyse Libet's mistaken ideas in more detail, because that is always where the devils hide. The problem arose because the subject was unaware of the causal role played by her emotionality (her motivation). She (and everyone else, including Libet) assumed that her consciousness was the same thing as (semantically equivalent to) her agency, or 'free will'. The hidden 'devil' is 'perceptual common coding' (PCC), which was working behind the scenes, sowing the seeds of confusion that were later harvested as conflict.

Emotions are mental causes. If we adopt a two-channel model of the brain, with an input channel and a separate output channel, emotions belong in the output channel because, like actions and motions, they are created by our voluntary desires (we construct wants from available choices) and thereby reduce the levels of our most urgent needs. But in PCC, output metrics are coded in terms of input metrics. This means emotions are converted into their equivalent perceptual state. Consider a typical percept consisting of several objects. Our brain must somehow rank these objects in order of their importance to the task at hand, namely planning our next behaviour. Emotions perform this ranking: they separate the foreground targets we must pay attention to from the other objects, which blend into the background and form the set of other goals in our awareness field. Treisman et al [3] discovered that we preferentially attend to those things that are defined by combinations of perceptual features.
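
The ranking role described above can be sketched in a few lines of Python. Everything here (the object names, the relevance scores, the winner-take-all choice of a single focus) is an illustrative assumption, not a published model; the sketch only shows how an emotional weighting over perceived objects separates a foreground focus from a background awareness set.

```python
# An illustrative sketch (object names, scores and the winner-take-all rule are
# assumptions, not a published model) of the ranking role attributed to emotions:
# each perceived object is scored for relevance to the current need, the top-scoring
# object becomes the attentional focus (foreground), and the rest remain in the
# background awareness field.

def rank_percept(objects, relevance):
    """objects: list of object names; relevance: function mapping a name to a score."""
    scored = sorted(objects, key=relevance, reverse=True)
    return scored[0], scored[1:]          # (foreground focus, background set)

def relevance_to_hunger(name):
    # Toy scores; in a real system these would be driven by the current need state.
    return {"kettle": 0.9, "overdue_bill": 0.4, "window": 0.1}[name]

focus, background = rank_percept(["kettle", "overdue_bill", "window"], relevance_to_hunger)
print(focus, background)                  # -> kettle ['overdue_bill', 'window']
```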

So, the summary of the argument is as follows- we need to identify all relevant combinations of each type of conscious and volitional state. Since volitional states are causal, they belong in the output channel. There are two types of volitional state- voluntary (global agency) and involuntary (local reflex). As agents, they act in a cybernetically feedforward manner- they are, in effect, commands, or imperative instructions. Perceptual states, however, though they form part of cause-effect chains, are not initial causes. They are effects, or resultants. There are two types of effect, or 'resultant', state- conscious and unconscious [4]. See Figure 2 below.

Lisa Feldman Barrett's [6] recent research has confirmed the causal role that emotions play in the formation of resultant perceptual states. She has not only shown that the brain produces a continuous chain of mental causes and effects, in the form of the interplay between emotions and memory states, but also demonstrated that emotions and memories are interchangeable, ie they play equivalent roles. This makes sense, since we associate attentional focus (shown by Treisman et al to require a combination of perceptual features) with the formation of short-term memories.
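
Continuing the previous sketch, the interplay Barrett describes might be caricatured as a loop in which the emotionally selected focus is written into a short-term memory buffer, and the contents of that buffer then bias the next round of emotional ranking. The buffer size and the bias value are invented here purely for illustration.

```python
# A hedged continuation of the previous sketch: the emotionally selected focus is
# written into short-term memory, and stored memories then feed back as extra
# emotional weight on the next ranking. The buffer size and the 0.2 bias are
# illustrative assumptions only.

from collections import deque

short_term_memory = deque(maxlen=5)       # a small rolling buffer of recent foci

def attend_and_remember(focus):
    """The attended object (a conjunction of features, per Treisman) becomes a memory."""
    short_term_memory.append(focus)

def memory_bias(name):
    """Memories act as causes in their turn: recently attended items gain extra weight."""
    return 0.2 if name in short_term_memory else 0.0

attend_and_remember("kettle")
print(memory_bias("kettle"), memory_bias("window"))   # -> 0.2 0.0
```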


Figure 2. The diagram depicts a plot of the four possible combinations of agent and resultant states. By identifying four 'compass-like' points in the set of state-wise combinations, the GOLEM scheme suggests a geography of mind.
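
For readers who prefer code to compass diagrams, the four combinations plotted in Figure 2 can be enumerated directly. The category labels are taken from the text above; the class and member names are an arbitrary, illustrative choice.

```python
# A direct enumeration of the 2 x 2 'geography of mind' plotted in Figure 2.
# The category labels come from the text; the class and member names are an
# arbitrary, illustrative choice.

from enum import Enum
from itertools import product

class AgentState(Enum):
    VOLUNTARY = "voluntary (global agency)"
    INVOLUNTARY = "involuntary (local reflex)"

class ResultantState(Enum):
    CONSCIOUS = "conscious"
    UNCONSCIOUS = "unconscious"

# The four 'compass-like' points: every pairing of an agent state with a resultant state.
for agent, resultant in product(AgentState, ResultantState):
    print(f"{agent.value:30s} ->  {resultant.value}")
```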


1. But then a lot of very smart people thought Stalin was a good guy and communism was the answer to poverty in capitalist countries. 

2. Libet, B. 

3. Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136. Treisman et al's discovery used the term 'attention' rather than its more controversial, 'tenure-cidal' synonym, 'consciousness'.

4. I prefer to use the term 'resultant' to 'patient', as the matching term to 'agent'. A patient (role) is a doctor's client who is often quite impatient (behaviour).

5. Some robotic terminology is inevitable in a discussion of this type. Ideally, the proportion of biomedical terms will equal it, ie 50:50, since the aim of the discussion is the development of substrate-independent principles which can (i) explain 'in vivo' consciousness, then (ii) implement 'in silico' machines which emulate the experience of conscious states, ie have an internal, independently governed "mental" life.

6. Barrett, Lisa Feldman (2017). How Emotions are Made: The Secret Life of the Brain. New York: Houghton Mifflin Harcourt.

