#1 Philosophical objections to Artificial Consciousness (A*)
Even if fellow Australian and Flinders alumnus David Chalmers had not achieved philosophical infamy with his characterisation of consciousness (C*) as an epistemologically 'hard' problem, there is nonetheless an underlying 'too-hard-basket' attitude shared broadly, both in the philosophical community and in society at large [1]. Chalmers does not seriously question the ontology of C*. Instead, he raises 'reasonable' doubt about its epistemology, ie our (presumably collective) capacity to discover how biology does it. He publicly doubts that we can decode the transformation from C* (the natural consciousness of animals, including humans) to A*, defined here as computational processes (a software emulation) isomorphic to C* but running on a non-living substrate (a hardware platform).
This discussion presents a solution to this problem. It 'distills' a 'denatured' set of canonical principles used by exemplars of C*, and then proposes a computer design built upon those principles. That design follows a Strong-AI paradigm called GOLEM (Goal-Oriented Linguistic Emulation of Mind), developed by the author between 2012 and 2022.

Figure 1a. The 'sandwich' model: (i) philosophical probity, (ii) scientific endeavour, (iii) technological enterprise.
Before diagram (ii) of Figure 1a can be taken seriously, acceptance of teleology in science must become mainstream. That acceptance is at odds with the Behaviourist roots of modern psychological orthodoxy. The problem with Behaviourism is that it isn't a science. Science concerns itself primarily with finding the correct causes for observed effects. The Behaviourists saw that behaviour was easy to measure and said, 'aha, we have an effect'. They broke behaviours down into building blocks called reflexes. A reflex is a learned association, or pairing, between a sensory stimulus (this must be the cause) and a motor response (this must be the effect). Complex animal and human behaviour is (obviously!) composed of (endless?) chains of reflexes. Easy as pie.
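To make the reflex-chain picture concrete, here is a minimal Python sketch of it. This is my own illustration, not part of the GOLEM design; the class and method names (ReflexChain, condition, run) are assumptions chosen for clarity. Note what the sketch quietly leaves out: nothing inside the model can start the chain.

```python
# A minimal sketch of the behaviourist picture just described: behaviour as a
# chain of stimulus -> response reflexes, where each response is treated as
# the stimulus for the next link. All names are illustrative assumptions.

from typing import Dict, List, Optional

class ReflexChain:
    """Behaviour modelled purely as learned stimulus-response pairings."""

    def __init__(self) -> None:
        # Each reflex maps a sensory stimulus ('cause') to a motor response ('effect').
        self.reflexes: Dict[str, str] = {}

    def condition(self, stimulus: str, response: str) -> None:
        """'Learn' a reflex by pairing a stimulus with a response."""
        self.reflexes[stimulus] = response

    def run(self, stimulus: Optional[str], max_steps: int = 10) -> List[str]:
        """Run the chain. Nothing in the model can start itself; an external
        stimulus must always be supplied."""
        produced: List[str] = []
        for _ in range(max_steps):
            if stimulus not in self.reflexes:
                break
            response = self.reflexes[stimulus]
            produced.append(response)
            stimulus = response  # the response becomes the next stimulus
        return produced

# Usage: a rat-in-a-maze style chain.
chain = ReflexChain()
chain.condition("see_lever", "press_lever")
chain.condition("press_lever", "see_food")
chain.condition("see_food", "eat")
print(chain.run("see_lever"))  # ['press_lever', 'see_food', 'eat']
```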
There is a glaring problem, one which behaviourist theory successfully managed to brush under the rug for almost half a century. That problem is agency. In other words, how is the chain of reflexes started in the first place? By introspection, we know that our own behaviour is initiated by our conscious mind planning a behaviour that satisfies our most pressing need, eg hunger, pain, an overdue bill, an unhappy member of the family. We reduce needs by locating and retrieving the kinds of resources which, for the most part, satisfy our wants. Even though every one of these thoughts, emotions and plans is part of our normal everyday reality, they weren't directly (ie physically) measurable, so they simply didn't exist in the Behaviourist scheme of things. Adult academics with tenure taught this stuff, and post-doctoral students actually believed it [1].
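By contrast, the introspective picture can itself be sketched in a few lines: an agent that selects its most pressing need and plans an act to reduce it. Again, this is a hedged illustration using invented names (Need, Agent, satisfiers), not the GOLEM design itself; its only purpose is to show where the initiation of behaviour comes from, namely inside the agent.

```python
# A minimal sketch (an assumption for illustration, not the author's GOLEM
# code) of need-driven agency: behaviour is *initiated* by selecting the most
# pressing need and planning an act that reduces it.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Need:
    name: str        # e.g. 'hunger', 'pain', 'overdue_bill'
    urgency: float   # higher = more pressing

@dataclass
class Agent:
    needs: List[Need] = field(default_factory=list)
    # Which resource satisfies which need (an illustrative lookup table).
    satisfiers: Dict[str, str] = field(default_factory=dict)

    def most_pressing(self) -> Need:
        return max(self.needs, key=lambda n: n.urgency)

    def plan(self) -> List[str]:
        """Agency in miniature: the plan originates inside the agent, from its
        own needs, rather than from an externally supplied stimulus."""
        need = self.most_pressing()
        resource = self.satisfiers.get(need.name, "unknown_resource")
        return [f"locate {resource}", f"retrieve {resource}", f"reduce {need.name}"]

agent = Agent(
    needs=[Need("hunger", 0.8), Need("overdue_bill", 0.4)],
    satisfiers={"hunger": "food", "overdue_bill": "payment"},
)
print(agent.plan())  # ['locate food', 'retrieve food', 'reduce hunger']
```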
Figure 1b. The diagram above depicts
1. But then a lot of very smart people thought Stalin was a good guy and communism was the answer to poverty in capitalist countries.
2. Libet, B.
3. Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136. Treisman and Gelade's discovery used the term 'attention' rather than its more controversial, 'tenure-cidal' synonym, 'consciousness'.
4. I prefer the term 'resultant' to 'patient' as the matching term to 'agent'. A patient is a doctor's client who is, funnily enough, often quite impatient.
5. Some robotic terminology is inevitable in a discussion of this type. Ideally, the proportion of biomedical terms will equal it, ie a 50:50 ratio, since the aim of the discussion is the development of substrate-independent principles which can (i) explain 'in vivo' consciousness, then (ii) implement (emulate) 'in silico' machines which experience conscious states, ie have an internal, independently governed "mental" life.
6. Barrett, L. F. (2017). How Emotions are Made: The Secret Life of the Brain. New York: Houghton Mifflin Harcourt.
7.