#0 Introduction 

The following discussion represents the culmination of a research project whose aim is the design of a conscious computer, ie a machine capable of introspective thought. This work extends the author's Flinders University Honours thesis, awarded in 2012. The GOLEM (Goal Oriented Linguistic Emulation of Mind) model is a development of the abstract state machine concepts outlined in that thesis. The GOLEM design contains data structures and computational processes which are claimed to be isomorphic to emotionality and perception in animals and humans. 

There is a difference between intellectual history and institutional history [20]. This discussion is strong on the former but weak on the latter. Reader, be warned. These are genuinely new ideas. However much I try to lend credence to them by anchoring them with reputable works, at the end of the day they must be allowed to stand (or fall) on their own merits - see [21] for some eye-opening limitations of the scientific method.

Until recently, this topic (ie so-called 'strong AI') was deemed to be unscientific, since it relied upon a viable solution to the 'hard' problem of consciousness (C*) [10]. However, a more optimistic outlook has recently been adopted by some leaders in this field [11]. The concept of structured qualia [12] exhibits key features of the axiomatic framework used by GOLEM theory. This framework is best described by referring to one of GOLEM theory's key insights, the principle of memory-text equivalence, which runs as follows:
(i) we each understand what it is like to be a unique but cospecific [13] agent called the 'self', whose perceived waking location is usually near the physical centroidal axis of our body.
(ii) the neural mechanism that enables us to understand our current 'situation' (defined as an ontological-epistemological predicament) is precisely the same as that which enables us to understand the meaning of the words we are currently reading from a document (or hearing from a narrator). 
(iii) Specifically, we understand the general use-case of the current word in our short term memory (STM) buffer by virtue of our learning the language in childhood (primary language acquisition, PLA), or later in the case of secondary language acquisition (SLA). Collective semantics, ie that of the current sentence, is obtained by combinatorial addition of the semantics of all the individual words in the sentence [14]. The process is analogous to the set-theoretic logic used in Venn diagrams. For example, the word 'table' in isolation has very many potential semantic 'matches'. It could be an item of furniture, or a 2-dimensional data list with rows and columns. However, when it is placed in the same sentence as other words - eg articles, adjectives, verbs - the list of likely matches is much smaller. The speaker/writer keeps adding words until they are satisfied that the list of local referents is small enough to match the required focal (ie situational-intentional) semantics. Notice that the semantics is combinational, not permutational.
(iv) The sentential semantics of each new utterance spoken must retrospectively match the semantics of the entire narrative, eg that of the growing body of text to which the current sentence retroductively refers. By the same token, the situational semantics of each new 'conscious snapshot' or perceptual field must retrospectively arise from (and retroductively be explained by) the animated list of iconic (short-term) memories ('our subjective experience') created during wakefulness.
(v) These mechanisms are identical, because they both result from the combinational nature of semantics. It is therefore evident that long-term memory and written text are isomorphic (= 'same shape') data structures, the former a natural result of evolutionary biology, the latter artificially constructed from non-living constituents. There is therefore no difference structurally (ie ontologically) between Block's [15] p-type and a-type classification of conscious experience. The only difference is procedural (ie epistemological), such that a-type consciousness makes semantic references which are in long-term memory (LTM) while p-type  consciousness makes semantic references to those short-term memories (STM) which constitute the current goal-oriented, behaviourally-defined predicament (called a Situation Image (SI) in GOLEM theory). 
(vi) Fred Pulvermueller [16] uses the term 'semantic grounding' to refer to the combination of retrospective (episodic) memory and retroductive (common-sense, conditionally justified expectations aka 'belief system') knowledge encoded in the brain's neurosemantic heterarchies [19].
(vii) Naccache [17] also reports the same contradiction in Block's concept of Phenomenal Consciousness (p-C*) as that pointed out by GOLEM theory: that p-C* needs Access Consciousness (a-C*) to work, ie that it is a 'front end' (like a GUI) for a-C*'s 'back end'. In other words, p-C* and a-C* conform to the software engineering concept of a client-server architecture. This is implied in GT's associating p-C* with STM (ie analogous to buffer or 'RAM' storage in our mind). Even though we might attend to an item in STM, we must necessarily associate its semantics by reference to LTM, ie we must use a-C* context to understand p-C* content. This means that true consciousness, ie the kind we deploy when we enter focal (attentional) mind states, relies on the semantic access which only a-C* possesses. While it may be true that perceptual contents of awareness (compare locus/local vs focus/focal) can occupy p-C*, we need a-C* resources to combine features [18] we are merely aware of into semantically complete, fully conceptualised items, such as the physical objects we focus on in order to manipulate them, manually (or, indeed, mentally), in pursuit of our currently active goals.
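The Venn-diagram narrowing described in point (iii) can be sketched in code as set intersection. This is a toy illustration only, not part of any GOLEM implementation; the miniature 'sense inventory' below is invented purely for the example.

```python
# Toy sense inventory (invented for illustration): each word maps to its
# set of candidate semantic 'matches'.
SENSES = {
    "table":  {"furniture", "data-grid", "plateau"},
    "wooden": {"furniture", "boat", "spoon"},
    "sort":   {"data-grid", "algorithm"},
}

def sentence_semantics(words):
    """Intersect the candidate sense sets of each word, mimicking the
    combinational (not permutational) narrowing described in point (iii)."""
    remaining = None
    for w in words:
        senses = SENSES.get(w)
        if senses is None:          # unknown word: adds no constraint
            continue
        remaining = senses if remaining is None else remaining & senses
    return remaining or set()

print(sentence_semantics(["wooden", "table"]))   # {'furniture'}
print(sentence_semantics(["sort", "table"]))     # {'data-grid'}
```

Each added word shrinks the list of local referents, exactly as the 'table' example describes; word order plays no role, which is what makes the operation combinational rather than permutational.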

Figure 0a - GOLEM biologically plausible computational architecture (BICA)

This research sought a non-ad hoc explanation for lateral asymmetry of cognitive function in animals and humans. Language production in humans is based in the left cerebral hemisphere, although other regions of the brain also provide contributions. The hypothetical reason for this lateralised functional specialisation is that the local pattern of input/output information within each of the brain's hemispheres is reproduced as a global I/O pattern of higher order representations (HOR) between the brain's hemispheres (see Figure 0a). 

According to Peirce's principle of retroductive inference ('Peirce' is pronounced "Purr-s", not "Peer-s"), the best choice of scientific hypothesis occurs when the predictions it yields are 'unremarkable', ie seem to conform to one's common-sense knowledge of the observations (both data and errata). This condition has been satisfied in the case of the GOLEM model. It therefore fulfils the aim of the research, and suggests that a computer based on these ideas could be built.

Hypotheses 'all the way down'

The scientific literature is full of hopeful souls who obviously believe that by throwing massive resources at 'bottom-up' investigations of problems, they will gain the advances in knowledge they seek. But, unfortunately, it isn't always so [3]. Massive 'bottom-up' experiments run the risk of drowning in a tsunami of confusing data, which at the end of the day must still be made sense of. It is better to adopt a 'top-down' approach from the start, by proposing a series of increasingly complex models, each one of which is tested and approved before moving on to the next. One must follow the best available hypotheses 'all the way down' [4].

The first tentative steps toward an adequate biology of cognition were made, first by pioneering semiotician Jakob von Uexkull in the 1930s, and then later by psychologist Bill Powers [5] in the 1970s [6]. Uexkull's concept of the 'circreis' and Powers' concept of 'perceptual control theory' (PCT) are both depicted in Figure 0b below. A broken red line around Uexkull's 'circreis' indicates the internal world, or 'innenwelt'. Outside the red line is the region Uexkull named the 'umgebung'. The blue Goal square (G), with the red guidance vector pointing from the Goal G to the target T, has a special name in Uexkull's ontology - it is called the 'umwelt' - and a special status: it is neither wholly internal, nor wholly external. It is what GOLEM theory calls intrasubjective. 

Uexkull       GOLEM Theory
umwelt     => intrasubjective
umgebung   => intersubjective (aka 'objective')
innenwelt  => infrasubjective

In GOLEM theory, every animate system is subjective. There are two types of subjective system: those which exhibit intelligence, and those which experience consciousness. Living cells are intelligent, because they need only 'understand' serial input streams (DNA, RNA, receptor excitations etc). So are game-playing programs like AlphaGo and question-answering programs like Watson. All complex creatures are conscious because they must understand parallel (concurrent, multifactor) event fields. They must not only decide which sets of events are related, but also which inputs are independent and which are dependent - ie which inputs are partly due to their own outputs. In the case of AlphaGo, the space of possible next moves is admittedly huge, but the distinction between self moves and opponent moves is a simple matter of two-handed time slicing. In the case of Watson, it understands lots of very subtle linguistic tricks, customs and obtuse references - the sort that only members of the digiterati (computer-savvy hipsters) [7] would appreciate.

Both Uexkull and Powers share the idea that living creatures are successful creators of their own behaviour even though (or perhaps, because) they have limited access to external information. Uexkull uses the example of the flea, which only needs to know about two things: ground vibrations (the steps of an approaching animal), and heat radiation (from the creature's blood vessels). The former stimulus causes the flea to jump; the latter causes it to insert its proboscis into the creature's vein. Although the mind of the flea is simple, it is 'top-down', ie 'high-level'. Powers' PCT experiments have demonstrated that an organism does not directly control (the term 'govern' is preferred) its own behaviour, nor external environmental variables, but rather exerts indirect governance of its behaviour by regulating its own perceptions of those variables. 

The internal cortical processes of the primate brain operate on a flow of abstract information, otherwise known as mental representations, with a relatively weak connection to actual objects of the external world [22]. The advanced information processing that occurs in the primate brain is similar to that which occurs in the flea's brain, in that it is 'abstract' or 'high-level'. It seems that the brain of every creature, great or small, has a similar teleology to every other brain, but an informational complexity commensurate with the creature's overall size. Without the concept of teleology, the idea behind this statement could not be easily expressed. Indeed, without the concept of teleology (ie overall purpose, or 'top-down' function), any attempt to liken the brain of a flea to the brain of a person would be met with a level of doubt bordering on ridicule. 


Figure 0b - Uexkull's 'circreis' at left; Powers' PCT frame is illustrated at right, using the target acquisition mechanism from a guided missile, eg a SAM like a 'Stinger' or an AAM like a 'Sidewinder'. The blue square is the current goal-oriented state of the vehicle; the red arrow is its guidance state vector. A total of four homeostats is needed to generate this guidance vector: a pair of positive and negative unidirectional homeostats for each of the x-coordinate (bearing) and y-coordinate (elevation). All cybernetic logic is internal and subjective.
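The four-homeostat arrangement described in the caption can be sketched as follows. This is a hedged illustration of the idea only: modelling each unidirectional homeostat as a one-way rectifier of its error signal is my simplification for the sketch, not Powers' or Uexkull's own formulation.

```python
def unidirectional_homeostat(error):
    """A one-way regulator: responds only to positive error, else silent."""
    return max(0.0, error)

def guidance_vector(goal, target):
    """Combine four unidirectional homeostats - a positive/negative pair
    per axis - into a single guidance vector from goal G toward target T.
    goal and target are (x, y) tuples."""
    gx, gy = goal
    tx, ty = target
    # x axis: one homeostat pulls in the positive direction, one in the negative
    x = unidirectional_homeostat(tx - gx) - unidirectional_homeostat(gx - tx)
    # y axis: likewise, a second opposed pair
    y = unidirectional_homeostat(ty - gy) - unidirectional_homeostat(gy - ty)
    return (x, y)

print(guidance_vector((0.0, 0.0), (3.0, -2.0)))   # (3.0, -2.0)
```

All quantities here are internal to the agent - the 'target' is its perceived target - which is the sense in which the cybernetic logic is subjective.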


Powers' PCT identifies six hierarchically arranged levels of feedback regulation in humans (and, presumably, animals). Each (n+1)th level provides the setpoint or 'regula' (Latin for 'rule') signal for the next lower, nth, level. Powers makes the insightful observation that each nth level setpoint value represents the simplest possible form of a hypothesis for nth level behaviour, since the simplest formula for variable y is {y = constant}. He adds the qualification that dynamics are produced by sufficiently slow setpoint variations.
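The levels-feeding-setpoints idea can be sketched as a toy two-level loop, in which the higher level holds a constant goal ({y = constant}, the simplest hypothesis) and its output becomes the reference that the lower level tracks. The gains and step size are arbitrary illustrative values, not anything from Powers' experiments.

```python
def pct_level(perception, setpoint, gain):
    """A single PCT comparator: output is gain * (setpoint - perception)."""
    return gain * (setpoint - perception)

# Level 2 holds a constant position goal; its output is the 'regula'
# (velocity reference) for level 1, which in turn drives acceleration.
goal = 10.0
position, velocity = 0.0, 0.0
for _ in range(200):
    velocity_ref = pct_level(position, goal, gain=0.5)   # level 2 output
    accel = pct_level(velocity, velocity_ref, gain=0.5)  # level 1 output
    velocity += accel
    position += velocity * 0.1                           # plant dynamics
print(round(position, 2))   # converges to 10.0
```

Note that neither level 'commands' the position directly: each level only regulates its own perceived error, which is the PCT point being illustrated.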

The importance of these basic cybernetic insights is that all the computations, whether analogue or digital, are subjective. This is what William James would have called 'common coding' - the outputs are coded in terms of inputs; the common output code is, in this case, perceptual (perceptual common coding, PCC). No forces are computed. Computations use positions (angles) only. Common coding lies at the heart of the computational framework used by structural engineers. There are very few structural elements which are 'statically determinate', ie for which there is a unique solution that can be found by Newton's laws. Most real-world structures are composed of members that are 'statically indeterminate'. The solutions for the forces (live loading functions) in loaded members are needed to compute stresses. But to get the desired force matrix, the displacement matrix must first be computed. Usually, iterative algorithms are used to 'zero in' on the solution set. The program is stopped when the determinant of the error matrix is small enough [8].
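The displacement-first workflow described above can be sketched as follows: solve the stiffness equation K·d = f iteratively, stop when the residual is small enough, then recover member forces from the displacements. The 2x2 stiffness matrix is a made-up toy, not a real structure, and simple Jacobi iteration stands in for the production-grade solvers used by finite-element packages.

```python
# Toy stiffness matrix and load vector (invented for illustration).
K = [[4.0, -1.0],
     [-1.0, 3.0]]
f = [10.0, 5.0]

# Displacements first: iterate on K.d = f until the residual is tiny.
d = [0.0, 0.0]
for _ in range(100):
    d_new = [(f[0] - K[0][1] * d[1]) / K[0][0],
             (f[1] - K[1][0] * d[0]) / K[1][1]]
    residual = max(abs(a - b) for a, b in zip(d, d_new))
    d = d_new
    if residual < 1e-12:    # 'error small enough' stopping rule
        break

# Only then are the member forces recovered, as f = K.d.
forces = [sum(K[i][j] * d[j] for j in range(2)) for i in range(2)]
print([round(x, 4) for x in d])       # displacements first...
print([round(x, 4) for x in forces])  # ...then the forces (= f)
```

The point mirrored from the text: the 'common code' the solver actually iterates on is the displacement (positional) vector; forces are only derived afterwards.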

Figure 0c - Diagram which illustrates the concept of Positional (aka Perceptual, or Displacement) Common Coding. To assess the value of any function, we measure the effect it has; in particular, we compare effect to cause, eg as a simple ratio. However, the roles of cause and effect are swapped when comparing the objective (ie INTERsubjective) case to the subjective (ie INTRAsubjective) case. In the subjective (lower diagram) case, to measure the effect we must use perceptual codes (displacements), not motor codes (forces). 

Most importantly, for all living creatures and, as we shall see, also for non-living cognates, common (usually perceptual) output coding is the sine qua non. Apart from those rudimentary computations performed at the musculoskeletal extremities of the PNS [9], all neural circuits code for linear or angular limb positions, or for higher-order combinations of those positions and angles. 

Finally, we can present the main point of this section, which is that all low-level computations are subjective. Since high-level computations are combinations of low-level ones, it follows that all high-level computations, including those which generate consciously experienced emotional and perceptual states, are also subjective. Subjectivity does not need to be introduced into the discussion in an ad hoc manner. It is always there, at all of the various levels. And we know this by introspection - we can be conscious of our current state of being, from low-level pains and pleasures to high-level ones. 

Two types of memory are needed by complex systems- buffer memory to store records of short-term (eg daily) events, and archive memory, to store long-term memories. It is the limbic axis, consisting of the amygdala and hippocampus, which must sort out what information is important enough to warrant its permanent encoding. 
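A minimal sketch of the two-store arrangement follows, with a salience threshold standing in (very loosely) for the limbic axis's gating decision. The threshold value and the event records are invented for illustration and carry no biological claim.

```python
from collections import deque

buffer_memory = deque(maxlen=5)   # short-term store: bounded, overwritten
archive_memory = []               # long-term store: permanent

def record_event(event, salience, threshold=0.7):
    """Every event enters the buffer; only sufficiently salient events
    are copied into the archive (the limbic-style gating decision)."""
    buffer_memory.append(event)
    if salience >= threshold:
        archive_memory.append(event)

for event, salience in [("breakfast", 0.2), ("near-miss on road", 0.9),
                        ("bus ride", 0.1), ("job offer", 0.8)]:
    record_event(event, salience)

print(list(buffer_memory))   # all four recent events
print(archive_memory)        # ['near-miss on road', 'job offer']
```

The bounded `deque` captures the essential property of buffer memory: old daily records are silently discarded unless the gate has promoted them to the archive.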

The mind is able to operate at a low level, ie subconsciously activate instinctive reflexes to reduce aches and avoid pains, as well as being able to use high-level, conscious focus to strategically plan much more complex, goal-oriented behaviour. One of these high-level conscious capacities is its ability to 'travel in time'. It can create past memories as well as generate future beliefs. For example, it...
* records contemporaneous percepts to form episodic (object and time cued) memories
** processes episodic records to obtain relocatable narrative (subject and topic cued) memories
*** processes narrative memories to create reusable knowledge- propositional states like 'private' beliefs and 'public' facts.
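The three stages above can be sketched as a toy pipeline. The record formats are invented for illustration only; no claim is made about how the brain (or GOLEM) actually encodes these structures.

```python
# Stage 0: contemporaneous percepts, as (time, object, place) tuples.
percepts = [("08:00", "rain", "window"), ("08:05", "umbrella", "hall")]

# Stage 1: episodic memory - object- and time-cued records.
episodic = [{"time": t, "object": obj, "place": pl} for t, obj, pl in percepts]

# Stage 2: narrative memory - subject/topic-cued and relocatable
# (the absolute timestamps are dropped; the topic binds the events).
narrative = [{"topic": "weather", "events": [e["object"] for e in episodic]}]

# Stage 3: propositional knowledge - a reusable 'private' belief
# distilled from the narrative.
beliefs = [f"when it rains, take an {narrative[0]['events'][1]}"]

print(beliefs)   # ['when it rains, take an umbrella']
```

Each stage discards cueing detail (first absolute time, then place) while gaining reusability, which is the progression the bullet list describes.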

These mechanisms are covered in considerable detail in www.your-brain.webnode.co.uk and www.golem-theory.webnode.com . However, the specific focus of this discussion is the abstract nature of consciousness. It is therefore important that we place precise limits on our ability to voluntarily control the focus of our consciousness. In the next section, the interplay between our emotions and our perceptions is investigated.

1. see Philpapers.org

2. With retroduction, the pre-condition that there are no ad hoc decisions is an important one, to eliminate deliberate, superficial mimicry.

3. see Karl Popper.

4. The Ayurvedic prophet claimed the world rested on the back of a truly enormous elephant. His initial answer to the inevitable question 'but what does the elephant stand on?' was that there was a huge turtle under each foot. When asked what those turtles stand on, his dismissive response was 'of course, it's turtles all the way down'.

5. Powers, W. T., Clark, R. K., and McFarland, R. L. (1960). "A general feedback theory of human behavior [Part 1; Part 2]". Perceptual and Motor Skills 11, 71-88; 309-323.

6. There were many other contributors, of course, such as Ashby, Wiener, Bush etc. I have chosen Uexkull and Powers to suit my narrative arguments.

7. When another 'ancient geek' asked me what I did, I said, utterly without giving it any thought, that I was a 'Biohacker'. Like Popeye says, I yam what I yam. 

8. No-one designs structures 'by hand' anymore. A 'Finite Element' or 'Boundary Element' method is used, implemented by specialist software applications.

9. PNS = peripheral nervous system. CNS = central nervous system. ENS = enteric nervous system and so on, except where acronyms are ambiguous.

10. Chalmers, D. (1995). Facing up to the problem of consciousness. J. Conscious. Stud. 2, 200-219.

11. Smith, D.H. & Schillaci, G. (2021) Why Build a Robot with Artificial Consciousness? A Cross-Disciplinary Dialogue. Frontiers in Psychology: Conceptual Analysis

12. Loorits, K. (2014) Structured Qualia: A solution to the hard problem of consciousness. Frontiers in Psychology: Hypothesis & Theory Article. 

13. an individual exemplar of its biological species, eg a unique human, one with shared existential/ ontological properties but with singular experiential/ epistemological properties

14. It is directly provable from electrophysiology (mental functions each have a 'signature' Event Related Potential, eg N60) that semantics is computed BEFORE syntax, and therefore cannot be caused by it, ie semantics simply cannot rely on syntax in the systematic manner touted by uberlinguist Noam Chomsky. Orthosyntactic (ie grammatical) rules of sentence generation are, instead, an infrastructural aide-memoire, introduced into childhood education to create an intelligent society whose members are all similarly informed, psychologically equipotent, and hence (it is hoped) disposed more kindly to one another, as per the republican mantra 'liberté, égalité, fraternité'.

15. Block, N. (2005) 'Two Neural Correlates of Consciousness'. Trends in Cognitive Sciences, 9(2), pp. 46-52.

16. Pulvermüller, F. (1992) Constituents of a neurological theory of language. Concepts in Neuroscience 3, 157-200.

17. Naccache, L. (2018) Why and how Access Consciousness can account for Phenomenal Consciousness. Phil. Trans. R. Soc. B 373.

18. Treisman, A. & Gelade, G. (1980) A feature-integration theory of attention. Cognitive Psychology 12(1), 97-136.

19. In GOLEM, the semantic heterarchy in the right side is embodied (intrasubjective), while the semantic heterarchy in the left side is situated (intersubjective)

20. This phrase, which succinctly describes this vital distinction, was (if my memory serves me well) brought to my attention by Margaret Boden, in her essential volume 'Mind as Machine'.

21. Collins, H. & Pinch, T. (1993) The Golem: What Everyone Should Know about Science. Cambridge University Press. 

22. Friedman, R. (2020) Themes of advanced information processing in the primate brain. AIMS Neuroscience, 7(4): 373-388. 

