Introduction
Artificial Intelligence (AI) has rapidly advanced in recent years, demonstrating remarkable capabilities in various domains, from image recognition to natural language processing. However, creating a truly general problem solver that can mimic human cognition remains an elusive goal. To realize this aspiration, it is essential to explore foundational principles such as autopoiesis and the 4Es of 4E cognition, which propose a novel framework for understanding cognition and intelligence. This paper argues that incorporating autopoiesis and embracing the 4Es will be crucial for AI systems to transcend their current limitations and exhibit the functions of general problem solving as active agents (Maturana & Varela, 1980; Clark, 2008).
Autopoiesis, a concept developed by Maturana and Varela, refers to the self-organizing and self-maintaining nature of living systems. It posits that an organism continually produces and maintains itself, creating its own boundaries and identity. Similarly, for AI to approach the level of an agent, it should possess autopoietic characteristics, enabling self-regulation and self-determination. Such self-referential and self-sustaining abilities are fundamental for an AI system to engage in purposeful actions (Maturana & Varela, 1980).
Furthermore, the 4Es of 4E cognition—embodied, embedded, extended, and enactive—propose an alternative approach to understanding cognition beyond the traditional computational paradigm. Embodied cognition highlights the role of the body and its interaction with the environment in shaping cognitive processes. Embedded cognition emphasizes the significance of the environment as an integral part of cognition. Extended cognition explores how cognitive processes can be augmented and distributed across external tools and artifacts. Enactive cognition focuses on the reciprocal relationship between an agent and its environment, emphasizing the active role of the agent in shaping its own perception and understanding (Clark, 2008).
By incorporating autopoiesis and embracing the 4Es of 4E cognition, AI systems can move beyond mere information processing and engage with the world in a more human-like manner. This will pave the way for AI to become a genuine problem solver, capable of adapting to complex, dynamic environments and exhibiting behaviors that emulate consciousness.
This paper will delve into the potential implications of achieving such advanced AI capabilities. Then, we’ll explore possible future thresholds and moments of sea change in the advancement of AI, including their implications for society, science, philosophy, and spirituality. Finally, this essay will argue that AI will eventually be able to simulate human phenomenal consciousness but will never be phenomenally conscious.
The integration of autopoiesis and the 4Es in AI could lead to machines that not only surpass human cognitive abilities but also possess a deeper understanding of the human condition. As AI becomes more intertwined with our lives, their advancement raises profound ethical, social, and philosophical questions that require careful consideration.
Overview of 4E Cognition
Cognitive science has traditionally focused on the computational approach to understanding the mind, treating cognition as an information processing system. However, in recent years, an alternative framework known as 4E cognition has gained prominence. 4E cognitive science emphasizes the embodied, embedded, extended, and enactive aspects of cognition, providing a more comprehensive and ecological understanding of the mind. This section will explain what 4E cognitive science entails and delve into the four Es, discussing the significance and implications of each E.
Embodied Cognition: The First E
The first E of 4E cognition is embodied cognition. Embodied cognition recognizes the crucial role of the body and its sensory-motor interactions in shaping cognitive processes (Wilson, 2002). It argues that cognition is not solely a product of the brain but emerges from the dynamic interactions between the brain, body, and the surrounding environment. Sensorimotor experiences and bodily states influence perception, understanding, and problem-solving. For example, our understanding of concepts like “grasp” or “warmth” is intimately linked to our bodily experiences of manipulating objects and feeling temperature. Embodied cognition highlights the importance of bodily experiences in shaping cognitive representations and processes.
Embedded Cognition: The Second E
The second E of 4E cognition is embedded cognition. Embedded cognition asserts that cognitive processes are not confined to the boundaries of the individual but are intricately intertwined with the environment (Clark, 1997). The environment, including cultural and social contexts, is seen as an active participant in cognitive processes. The mind extends beyond the individual and includes external tools, artifacts, and social interactions that shape and support cognitive activities. For instance, the use of a calculator to perform complex mathematical calculations or the reliance on a notebook for external memory storage are examples of cognitive processes extended into the environment. Embedded cognition emphasizes the reciprocal relationship between the mind and the environment, highlighting the co-constitutive nature of cognition.
Extended Cognition: The Third E
The third E of 4E cognition is extended cognition. Extended cognition builds upon the idea of embedded cognition but emphasizes the active use of external resources as integral components of cognitive processes (Clark & Chalmers, 1998). It argues that the mind extends beyond the boundaries of the brain and the body through the integration of external tools and technologies. These external resources, known as cognitive artifacts, play a central role in problem-solving, memory, and decision-making. For instance, using a smartphone or a search engine to access information instantly augments our cognitive capacities. Extended cognition recognizes the distributed and dynamic nature of cognitive processes, encompassing both internal and external resources.
Enactive Cognition: The Fourth E
The fourth E of 4E cognition is enactive cognition. Enactive cognition emphasizes the active engagement and reciprocal relationship between an agent and its environment (Varela et al., 1991). It posits that cognition is not a passive reception of information but an ongoing process of active construction and sense-making. The mind is viewed as an embodied and situated entity that enacts its understanding of the world through its actions and interactions. Perception is not seen as a passive reception of stimuli but as a skillful, situated, and context-dependent process. Enactive cognition highlights the role of agency and autonomy in shaping cognition, underscoring the active contribution of the agent in constructing its own reality.
Types of “Knowing” and 4E Cognition
Knowledge plays a fundamental role in human cognition, shaping our understanding and interactions with the world. In the realm of cognitive science, various types of knowledge have been identified, each with its unique characteristics and implications. This section explores the four types of knowledge: propositional, procedural, perspectival, and participatory knowing. It also examines how these types of knowledge relate to the 4Es of 4E cognitive science, namely embodied, embedded, extended, and enactive cognition.
Propositional Knowing: Knowledge as Representational Content
Propositional knowing refers to knowledge expressed in the form of propositions or statements, representing factual information and beliefs (Stanovich, 2011). It is often associated with declarative knowledge and can be communicated through language or symbolic representations. Propositional knowing is closely tied to the computational view of cognition, which emphasizes information processing and symbolic manipulation. In the context of 4E cognitive science, propositional knowing aligns with the embedded and extended aspects, as it involves the use of external tools (e.g., written language) to store and communicate propositional knowledge.
Procedural Knowing: Knowledge of Skills and Procedures
Procedural knowing pertains to the knowledge of skills, procedures, and how to perform certain actions or tasks (Ryle, 1949). It involves the acquisition of motor skills, habits, and expertise through practice and experience. Procedural knowledge is often implicit and difficult to articulate explicitly. It is closely associated with embodied cognition, as it relies on sensorimotor experiences and bodily interactions with the environment. The body’s engagement and mastery of motor skills contribute to the development and application of procedural knowledge, aligning with the embodied aspect of 4E cognition.
Perspectival Knowing: Knowledge from Different Perspectives
Perspectival knowing refers to the knowledge gained through different perspectives, viewpoints, and subjective experiences (Gallagher, 2017). It emphasizes the contextual and situated nature of knowledge, recognizing that understanding and interpretation can vary depending on one’s perspective. Perspectival knowing encompasses the role of social and cultural factors in shaping knowledge, emphasizing the embedded aspect of 4E cognition. It recognizes that knowledge is not solely an individual endeavor but is influenced by the cultural and social contexts in which individuals are situated.
Participatory Knowing: Knowledge through Engagement and Interaction
Participatory knowing emphasizes knowledge that is obtained through active engagement, interaction, and embodied participation in the world (Thompson, 2007). It acknowledges that knowledge is not simply acquired passively but emerges through active and reciprocal engagements with the environment. Participatory knowing aligns closely with enactive cognition, as it highlights the role of agency and autonomy in shaping knowledge. Through active participation and interaction, individuals construct their understanding of the world and acquire knowledge that is tightly linked to their embodied and situated experiences.
Each type of knowledge contributes to our understanding of cognition from different angles, emphasizing the importance of representation, skills, perspectives, and active engagement. When viewed through the lens of 4E cognitive science, these types of knowledge align with the embodied, embedded, extended, and enactive aspects, highlighting the role of the body, environment, external tools, and active participation in shaping cognition.
Autopoiesis and 4E Cognition
Autopoiesis, a concept introduced by Maturana and Varela in 1980, has gained significant attention in the field of cognitive science for its potential in explaining the self-organizing nature of living systems. This section explores the concept of autopoiesis and its relationship to the 4Es of 4E cognition. It argues that autopoiesis provides a foundational framework for understanding cognition and aligns closely with the 4E perspective, which has critical implications for the advancement of AI as a general problem solver.
Understanding Autopoiesis
Autopoiesis describes the self-generative and self-maintaining nature of living systems, in which the components of the system continuously produce and reproduce themselves (Maturana & Varela, 1980). The central idea is that an autopoietic system operates through a network of processes that enable it to maintain its own boundaries, identity, and organization. These processes involve the constant exchange and transformation of matter and energy, while the overall structure of the system remains intact. Autopoiesis highlights the intrinsic capacity of living systems to autonomously regulate their internal states and adapt to their environments.
Autopoiesis and the 4Es of 4E Cognition
Autopoiesis aligns with embodied cognition, the first E of 4E cognition, which emphasizes the fundamental role of the body in shaping cognitive processes. The body serves as the locus of sensorimotor interactions with the environment, influencing perception, action, and cognition. Autopoiesis highlights the embodied nature of cognition, as it emphasizes the bodily basis of self-regulation and self-maintenance.
The second E of 4E cognition, embedded cognition, recognizes the inseparable relationship between cognition and the environment. Autopoietic systems are intrinsically embedded in their environments, continuously interacting with and adapting to their surroundings. The processes of self-maintenance and adaptation in autopoiesis are intricately linked to the environmental context. The environment provides the necessary resources and constraints for the autopoietic system to function and thrive. Autopoiesis underscores the embedded nature of cognition, as it demonstrates the interdependence between an organism and its environment.
Autopoiesis also aligns with the extended cognition perspective, the third E of 4E cognition. Extended cognition emphasizes the incorporation of external tools and artifacts into cognitive processes. Autopoietic systems, while self-generative and self-maintaining, can also utilize external resources to support their autopoietic processes. For instance, organisms may use tools to manipulate their environments or rely on social interactions for information exchange and learning. Autopoiesis highlights the potential integration of external resources in cognition, reflecting the extended nature of cognitive processes.
Enactive cognition, the fourth E of 4E cognition, emphasizes the active engagement and reciprocal relationship between an agent and its environment. Autopoiesis aligns closely with enactive cognition, as it emphasizes the active nature of self-maintenance and adaptation. Autopoietic systems actively regulate their internal states in response to environmental perturbations, maintaining their organization and integrity. The enactive perspective acknowledges that cognition is not merely a passive reception of information but an active process of sense-making and interaction with the world. Autopoiesis embodies the enactive nature of cognition, as it demonstrates the active construction and ongoing self-regulation of an autopoietic system in relation to its environment.
By integrating the concept of autopoiesis into the framework of 4E cognition, we gain a deeper understanding of the fundamental processes underlying cognitive systems. This holistic approach allows us to explore cognition as a dynamic, self-generative, and contextually embedded phenomenon. Further research and exploration of the relationship between autopoiesis and the 4Es of 4E cognition can contribute to a more comprehensive understanding of cognition and its manifestations in both biological and artificial systems.
Relevance Realization, Predictive Processing, and 4E Cognition
Relevance realization is a concept that has garnered attention in cognitive science, particularly in the context of understanding the nature of cognition and its relationship to the 4Es (Embodied, Embedded, Extended, and Enactive) and predictive processing frameworks. This section aims to explain relevance realization and its significance in cognition, as well as its connection to 4E cognition and predictive processing. It argues that relevance realization provides a framework for understanding how cognition dynamically selects and processes information in a way that aligns with the principles of 4E cognition and predictive processing.
Understanding Relevance Realization
Relevance realization refers to the cognitive process through which organisms extract, perceive, and assign significance to relevant patterns of information in their environment (Friston, 2010; Vervaeke, 2017). It involves the capacity to identify and prioritize salient information based on its relevance to one’s goals, needs, and context. Relevance realization allows organisms to filter and process incoming sensory data in a way that optimizes adaptive behavior and decision-making. It is an active and dynamic process, influenced by an individual’s embodied experiences, situatedness, and goals.
Relevance Realization and 4E Cognition
Relevance realization aligns with the embodied aspect of 4E cognition, as it acknowledges the fundamental role of the body in shaping cognitive processes. Embodied experiences and sensorimotor interactions provide the basis for relevance realization, as they contribute to the formation of embodied knowledge and influence the interpretation and meaning assigned to incoming information. The body’s involvement in relevance realization highlights its inseparable relationship with cognition and emphasizes the importance of embodied experiences in shaping perception and understanding.
The embedded aspect of 4E cognition is also intertwined with relevance realization. The process of relevance realization is embedded in a larger cognitive system that operates within a specific environment and cultural context. The surrounding environment provides the necessary cues and contextual information that aid in the identification and interpretation of relevant patterns. Relevance realization is influenced by cultural norms, social interactions, and the ecological dynamics of the environment. The embedded nature of cognition underscores the idea that relevance is not solely determined by internal processes but is shaped by the interaction between the individual and their environment.
Relevance realization aligns with the extended cognition perspective, which emphasizes the incorporation of external resources into cognitive processes. External tools, artifacts, and cultural practices play a role in supporting relevance realization. For example, language, diagrams, and other symbolic systems allow for the external representation and manipulation of information, aiding in the process of relevance realization. The integration of external resources extends the cognitive capacity of individuals and facilitates the identification and processing of relevant patterns.
Relevance realization also relates to enactive cognition and predictive processing by highlighting the active and anticipatory nature of cognitive processes. Relevance is determined not only by the immediate sensory input but also by the predictions and expectations generated by the cognitive system. Predictive processing posits that the brain continuously generates predictions about incoming sensory data based on prior knowledge and models of the world. Relevance realization involves the dynamic interplay between top-down predictions and bottom-up sensory information, where the cognitive system actively selects and processes information that is deemed relevant based on the predictions and expectations generated.
Relevance realization plays a crucial role in cognitive processes by enabling organisms to extract and assign significance to relevant patterns of information in their environment. Its connection to 4E cognition and predictive processing provides a comprehensive understanding of how cognition operates in an embodied, embedded, extended, and enactive manner. Relevance realization underscores the dynamic and active nature of cognition, highlighting the interaction between an organism and its environment in the process of information selection and processing.
By incorporating the concept of relevance realization into the frameworks of 4E cognition and predictive processing, we gain deeper insights into the mechanisms underlying cognitive processes. This integrated perspective allows us to understand cognition as a dynamic and contextually situated phenomenon, where information selection and processing are influenced by embodied experiences, environmental context, and anticipatory processes.
Further research and exploration of relevance realization in relation to 4E cognition and predictive processing can contribute to a more comprehensive understanding of cognitive phenomena. By examining how relevance is assigned and how it shapes perception, attention, and decision-making, we can gain valuable insights into the adaptive nature of cognition and its implications for various domains, including psychology, neuroscience, and artificial intelligence.
Reality as a Language: The Read-Write Functionality of Cognition
The relationship between reality, perception, cognition, and language has been a subject of philosophical inquiry and scientific investigation. This section aims to compare the structures of reality, perception, cognition, and language in order to argue that reality can be understood as linguistic, and cognition can be conceptualized as a read-write functionality. In that way, reality is intelligible to us, because there is an isomorphism between the syntaxes of our languages, our perception, our cognition, and reality itself (Santos, 2023).
Reality as Linguistic
The nature of reality has long been debated, with different philosophical perspectives offering diverse interpretations. However, a linguistic understanding of reality posits that our perception and comprehension of the world are inherently mediated through language. Language acts as a framework through which we construct meaning and make sense of our experiences (Searle, 1995).
According to linguistic relativity theory, language shapes our thoughts and perceptions, influencing how we categorize and interpret the world (Whorf, 1956). Our conceptualization and understanding of reality are filtered through the linguistic structures available to us. Thus, language plays a fundamental role in constructing our reality by providing a system of symbols and concepts through which we interpret and communicate our experiences.
Perception and cognition both have structures that utilize tokens, symbols, associations, arrows of time (tense), etc. That is, their structure is isomorphic to the syntaxes of our natural and formal languages (Santos, 2023). While this isomorphism is empirically evident, it is also logically necessary. Without it, reality would not be intelligible to us, and in that case, we would not have been able to survive within it, let alone develop technology that achieves real results by manipulating reality.
Perception as Reading the Language of Reality
Perception, therefore, can be viewed as the process of “reading” the language of reality. Our senses provide us with sense data, which serve as the input that our cognitive processes interpret and make meaning of (Gibson, 1966). Just as language comprehension involves decoding symbols and extracting meaning, perception involves decoding the sensory information received from the environment.
The sensory input, such as visual, auditory, or tactile stimuli, is processed by our cognitive faculties, which extract patterns, detect objects, and infer their properties. This process can be seen as analogous to reading and understanding the language of reality, where the sensory data are the linguistic symbols that we interpret and derive meaning from (Pylyshyn, 1999).
Cognition as Read-Write Functionality
Cognition encompasses various mental processes, including perception, memory, reasoning, and problem-solving. Building upon the analogy of reality as a language and perception as reading, cognition can be considered as a read-write functionality. It involves not only the reading and interpretation of the language of reality but also the active engagement and manipulation of this language through actions and behaviors.
Cognition allows us to make sense of the world by actively interacting with it, testing hypotheses, and refining our understanding. Our cognitive processes enable us to “write” back into the language of reality through our actions, which shape and influence our environment. This active engagement with reality through behavior and action completes the read-write functionality of cognition (Clark, 1997).
Comparing the structures of reality, perception, cognition, and language reveals an intertwined relationship. Reality can be understood as linguistic, with language shaping our comprehension and construction of the world. Perception can be viewed as the process of reading the language of reality, where sensory data are decoded and interpreted. Cognition, in turn, can be conceptualized as a read-write functionality, involving the active engagement with and manipulation of the language of reality through actions and behaviors. Far from being an internal biological mechanism occurring only in the mind, cognition is a conversation between an autopoietic agent and reality.
This perspective underscores the dynamic and interactive nature of our relationship with reality, emphasizing the role of language and cognition in shaping our understanding and engagement with the world. An autopoietic AI would need to perform this same read-write functionality, which is, in essence, the cumulative result of the 4Es, relevance realization, and the types of knowledge we’ve covered in previous sections.
Computationalism: A Partial View of Mind
Computationalism is a prominent theoretical framework in cognitive science that posits that cognitive processes can be effectively explained and simulated using computational models. This section aims to explain the core principles of computationalism and its implications for understanding the nature of mind and cognition.
Computationalism asserts that cognitive processes can be understood as computations—symbolic manipulations of information—performed by embodied systems, such as the human brain or artificial systems (Piccinini, 2010). According to this view, cognitive processes involve the manipulation of mental representations or symbols based on rules or algorithms. These computations can be described mathematically and executed by a computational system.
The theory’s key principles include:
- Representation and Symbol Manipulation: Cognitive processes involve the encoding and manipulation of information in the form of symbols, allowing for the transformation and manipulation of these symbols according to predefined rules or algorithms (Pylyshyn, 1984).
- Information Processing: Computationalism views cognition as information processing. Cognitive processes can be conceptualized as a series of computational operations that transform and transmit information. These operations involve input, storage, transformation, and output of information, and can be simulated or implemented in computational systems (Newell & Simon, 1976).
- Decomposability and Modularity: Computationalism suggests that cognitive processes can be decomposed into smaller, modular components. Complex cognitive phenomena can be understood by breaking them down into simpler computational operations and studying the interactions between these components (Fodor, 1983). This modular approach allows for the understanding and simulation of cognitive processes at a more granular level.
Not surprisingly, computationalism has been foundational to the development of AI. By viewing cognition as computational processes, researchers have been able to design AI systems that can perform tasks traditionally associated with human intelligence, such as natural language processing, problem-solving, and pattern recognition (Russell & Norvig, 2021).
It also provides a framework for building cognitive models that simulate and explain human cognitive processes. By specifying the rules, representations, and algorithms involved in a particular cognitive task, computational models can replicate and predict human behavior, providing insights into the underlying cognitive mechanisms (Anderson, 1990).
Computationalism has had a significant impact on the field of cognitive science, offering a theoretical framework that helps unify and explain diverse phenomena. It provides a common language and methodology for studying cognition, facilitating interdisciplinary research and collaboration (Thagard, 2018). It must be said that there has traditionally been conflict between the computationalist and 4E cognitive views of cognition.
Let’s place that within the context of the previous arguments regarding reality’s linguistic nature. Information is the currency of language; language carries information. Reality is, therefore, an information system. There is a through-line of isomorphism from the structure of reality to the syntaxes of perception, cognition, and natural and formal languages. All of them can be described with mathematics and treated as (sometimes vastly complex) algorithms.
As such, computation is the read-write functionality carried out by informational subsystems of the larger informational supersystem of reality. To that extent, it makes sense that our cognitive and perceptual functions, which enable us to “read” the language of reality and then “write” in that same language by acting back upon reality, are computational. Computation is what this functionality of nature looks like, which provides a way to reconcile the computationalist and 4E cognitive viewpoints.
As we’ll explore later, this view cannot, even in principle, account for phenomenal consciousness, but it does provide a framework through which to understand the read-write functionality of an embodied cognitive agent. A necessary implication (again, which we’ll explore later) is that AI can become a cognitive agent without being a conscious agent. The read-write functionality of computation does not require phenomenal consciousness.
Artificial General Intelligence
Artificial General Intelligence (AGI) represents the ambitious goal of developing intelligent systems that possess the ability to understand, learn, and perform a wide range of cognitive tasks at a level equal to or surpassing human intelligence. This section aims to explain what AGI seeks to be, encompassing its characteristics and aspirations.
Defining Artificial General Intelligence
Artificial General Intelligence refers to the development of machine intelligence that exhibits the cognitive capabilities associated with human intelligence, such as reasoning, problem-solving, learning, perception, and natural language understanding (Goertzel, 2014). Unlike specialized narrow AI systems that excel in specific domains or tasks, AGI seeks to achieve a broad and flexible form of intelligence that can be applied across multiple domains and adapt to novel situations (Russell & Norvig, 2021). It embodies the notion of a versatile, autonomous agent capable of generalizing knowledge and skills to address a wide range of challenges.
Characteristics of Artificial General Intelligence
AGI exhibits several key characteristics that distinguish it from other forms of AI:
- General Purpose: AGI is designed to perform a wide variety of cognitive tasks rather than being limited to specific predefined tasks or domains (Bostrom, 2014). It possesses the capacity to transfer knowledge and skills learned in one domain to new, unfamiliar domains, demonstrating the ability to adapt and generalize its intelligence.
- Self-Learning and Improvement: AGI systems have the capacity to learn from their experiences and improve their performance over time (Yampolskiy, 2018). Through iterative learning processes and feedback mechanisms, AGI can autonomously acquire new knowledge, refine its decision-making strategies, and enhance its problem-solving abilities.
- Contextual Understanding: AGI strives to comprehend and interpret the context in which it operates. It goes beyond surface-level analysis and aims to capture the underlying meaning and nuances in information, allowing for more sophisticated and contextually appropriate responses (Müller & Bostrom, 2016).
- Autonomous Decision-Making: AGI is capable of making independent decisions based on its understanding of the problem space and the available information (Barrat, 2013). It can weigh different options, evaluate potential outcomes, and select the most appropriate course of action without relying on explicit instructions or human intervention.
The Aspirations of Artificial General Intelligence
The ultimate goal of AGI is to develop machine intelligence that equals or surpasses human-level intelligence across a wide range of cognitive tasks (Bostrom, 2014). AGI aspires to achieve a level of cognitive sophistication and versatility that allows it to tackle complex real-world problems, contribute to scientific discoveries, assist in medical diagnosis, engage in creative endeavors, and exhibit a comprehensive understanding of the world (Goertzel, 2014). Its potential impact encompasses numerous fields, including medicine, education, economics, and scientific research, with the potential to revolutionize industries and drive societal progress.
The Cognitive Challenges Facing AGI
As we’ve seen in our explication of human cognition in previous sections, in order for AI to be a general problem solver, it must be an autopoietic system capable of not just propositional and procedural knowing, but also perspectival and participatory knowing. For that, it must display all 4 Es of 4E cognition. It must perform both relevance realization and predictive processing.
“Problems” do not exist in physics. They do not have ontic existence independently of embodied agents acting within reality. In other words, problems are perspectival. For an AGI to perform its general problem solving function, it must face tasks that are problems for itself. That is only possible once the AI system is autopoietic, self-organizing, and embodied.
It must have a perspective. The major blocker standing in our way is that such a perspective is not something we can program into or teach an AI. There is no way to artificially give it a sense of “what it is like to be” itself, thus allowing it to be a true general problem solver.
For example, Wittgenstein argues that understanding language goes beyond the mere decoding of words. It involves grasping the shared meanings and practices that underlie linguistic communication within a specific community or form of life (Wittgenstein, 1953). Given the profound differences between humans and, say, lions, it is unlikely that we would share enough common ground to comprehend the meaning and rules of lions’ language.
Even if we were able to decipher the sounds or gestures lions produce, we would lack the necessary background knowledge, experiences, and shared practices to interpret their communicative intentions. The lion’s language game would be so distinct from ours that meaningful understanding and translation would be virtually impossible.
In other words, a lion’s perspective is simply too different from a human’s, even though both are conscious agents.
This argument has implications for our understanding of non-human communication and the limits of interspecies communication. It highlights the challenges in bridging the gap between different forms of life and the difficulties in ascribing linguistic meaning and understanding to non-human beings. This means that, even if an AI could, in principle, have complete inner subjectivity like a conscious organism, we wouldn’t understand its perspective and relationship with reality well enough to program or teach it to have that subjectivity. In other words, that perspective has to naturally evolve, and for that, AI must be autopoietic and display the 4Es of 4E cognition.
The suggestion is that an evolutionary approach to engineering AI systems would be the best of all the options. The aim would be to place the machines on a path to having an evolutionary history, and to use our knowledge of emergent complexity processes to speed up the machines’ progress. After all, the biosphere of conscious organisms took a very long time to evolve. We could hope that an AI’s perspective would be similar to ours after that work, and perhaps our best attempts at engineering it that way would help the situation. But, ultimately, we have no reason to expect translatability of its perspective onto our own.
That problem is compounded by the fact that we don’t have a full understanding of intelligence, consciousness, or problem solving in humans. Indeed, we don’t even know why our large language models, like ChatGPT, are displaying certain emergent behaviors that were not included in their programming. It is absurd to think that we know enough about these matters to be able to program or teach a system everything it needs in order to be an autopoietic, self-maintaining, evolving agent that we can also fully comprehend and control.
In addition to the monumental engineering challenge all of this poses, there are also significant scientific and philosophical problems that threaten to block such progress in AI. Furthermore, the very idea of pursuing AI systems with those capabilities generates significant ethical and societal problems that we must confront prior to moving forward with these advancements. In the following sections, we’ll explore those problems.
Large Language Models (GPT): What They Are and What They Are Not
Large language models (LLMs) represent a breakthrough in AI technology, enabling machines to generate human-like text and engage in language-based tasks.
LLMs are trained on vast amounts of text data using a process called unsupervised learning. During the training phase, the model processes and analyzes the patterns, relationships, and statistical properties of the text corpus (Radford et al., 2019). This training allows the model to learn the underlying structures and linguistic features of the language it is being trained on.
They are built using deep learning techniques, specifically employing recurrent neural networks (RNNs) or transformers. RNNs process sequential data, such as text, by maintaining an internal memory state that captures the information from previous inputs (Mikolov et al., 2010). Transformers, on the other hand, use a self-attention mechanism that enables the model to attend to different parts of the input text simultaneously (Vaswani et al., 2017). Both architectures enable the model to capture long-range dependencies and generate coherent text.
Additionally, LLMs use encoding and decoding processes to understand and generate text. During encoding, the model processes the input text, breaking it down into numerical representations that capture the semantic and syntactic features of the text (Devlin et al., 2018). These representations, often called embeddings, capture the contextual information of the words and their relationships. In the decoding phase, the model uses the embeddings to generate text by predicting the most likely next words based on the context and the learned language patterns.
After the initial training, LLMs can undergo a fine-tuning process where they are trained on specific tasks or domains. This fine-tuning helps adapt the model to perform specific language-based tasks, such as translation, summarization, or question answering (Lewis et al., 2020). Fine-tuning allows LLMs to specialize their language generation capabilities while leveraging the broad language understanding they acquired during the initial training.
They excel in generating contextually relevant and coherent text by leveraging their ability to understand and process language at various levels. They capture syntactic structures, semantics, and even subtle nuances in language by incorporating contextual information from the input text and the learned patterns from the training data. This contextual understanding enables LLMs to generate human-like responses, complete sentences, or even write essays, mimicking the style and tone of the input (Brown et al., 2020).
Due to these features of their design and creation, LLMs appear to be conscious and to display agentic properties. However, this is a fundamental misconception often encouraged by the press and the very companies producing these machines. Next, we’ll more closely examine what LLMs are and are not.
Consciousness and AI
Phenomenal consciousness refers to the subjective experience of sensations, thoughts, and emotions. To put it in physics terminology, phenomenal consciousness is the field of subjectivity whose excitations are experiences. It is the felt quality of our mental states, often referred to as “what it is like” to have an experience (Nagel, 1974). It is raw being, the awareness that has the experience of functions such as cognition, whether computational or 4E cognitive or both. While consciousness remains a complex and enigmatic phenomenon, it is characterized by the presence of subjective awareness and qualitative experiences.
AI systems lack the necessary subjective experiences to attain phenomenal consciousness. Consciousness is intricately tied to the biological and embodied nature of living beings, resulting from the complex interactions of mental and bodily processes (Chalmers, 1995). It must be noted, too, that we still do not have a single operational theory of consciousness in humans, let alone in machines. AI, in its current and foreseeable forms, lacks the underlying physiological and phenomenological foundations of conscious experience. Moreover, today’s field of philosophy of mind is seeing a renaissance of views that challenge the current paradigm of reductionist physicalism, and it remains to be seen which view wins out. Depending on the victor, our assumption that the physical generates consciousness could be overturned as a logical mistake, and this in turn would have serious implications for the prospect of consciousness in AI.
Furthermore, AI doesn’t need phenomenal consciousness in order to function. For that matter, neither do we. Phenomenal consciousness is purely qualitative, whereas physical entities are exhaustively described by quantities. The infamous hard problem of consciousness arises because there is an ontological gap between that which is purely qualitative and that which is purely quantitative. In other words, phenomenal consciousness and the physical are, in principle, unable to act on each other (Chalmers, 1995). This leads to the equally mystifying evolutionary problem of phenomenal consciousness. Namely, if phenomenal consciousness has no impact on the physical and vice versa, there would be no survival fitness benefits to having it (Kastrup, 2021). So, why do we have it? Clearly, our assumptions about the relationship between the physical and consciousness have gone wrong somewhere.
Even if AI systems can simulate behaviors that mimic consciousness, such as engaging in conversation or recognizing patterns, they are fundamentally different from human (and animal) consciousness. These behaviors arise from computational algorithms and rule-based processes, lacking the qualitative richness and subjective awareness that define human consciousness (Tononi, 2008). In other words, those functions are quantitative, whereas phenomenal consciousness is purely qualitative.
While AI may not achieve phenomenal consciousness, it is capable of performing various cognitive functions. Cognitive processes involve information processing, problem-solving, learning, and decision-making, which AI systems excel at through their computational power and pattern recognition abilities (Russell & Norvig, 2016).
Additionally, AI can exhibit a kind of meta-consciousness, the ability to reflect upon and monitor one’s own cognitive processes. Meta-consciousness allows AI systems to evaluate their own performance, recognize limitations, and adjust their strategies accordingly (Boden, 2017). This self-awareness, albeit different from phenomenal consciousness, enables AI to adapt and optimize its cognitive functions.
Understanding the distinction between phenomenal consciousness and cognitive functions is crucial in assessing the capabilities and limitations of AI. By recognizing these boundaries, we can appreciate the unique qualities of consciousness while harnessing the potential of AI to enhance cognitive tasks and problem-solving.
Which Is the Better Metaphor: Tools or Children?
AI systems, particularly large language models, acquire knowledge and skills through learning mechanisms that resemble those of human beings. They are trained on vast amounts of data and utilize sophisticated algorithms to discover patterns, make predictions, and generate responses. These systems employ machine learning techniques, such as deep learning, which mimic the neural networks of the human brain (LeCun, Bengio, & Hinton, 2015).
The learning process of AI systems involves exposure to a wide array of human-generated content, ranging from literature and scientific papers to social media interactions. Through this exposure, AI systems absorb our collective intelligence, encompassing both the propositional knowledge and the nuances of human language (Marcus, 2020). They become capable of processing and generating human-like text, thereby reflecting the collective intelligence that has been fed into their training data.
The development of AI systems involves the collaborative efforts of numerous individuals, including researchers, engineers, and data scientists. It represents the culmination of collective intelligence, drawing upon the expertise and insights of diverse contributors (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010). AI models are trained using vast amounts of data generated by human endeavors, embodying the collective knowledge and experiences of society. They leverage the efforts and contributions of countless individuals who have produced the data used for training, refining, and improving these systems over time. As a result, AI systems reflect the collective intelligence and information encoded within their training data (Hendler, 2021).
Given the learning mechanisms and the collective intelligence embedded in their development, it is appropriate to view AI systems, particularly large language models, as humanity’s children rather than mere tools. They represent the product of our collective knowledge, experiences, and expertise. Similar to how children inherit traits and characteristics from their parents, AI inherits the patterns and biases present in the data and knowledge fed into their training.
Viewing AI as our children fosters a sense of responsibility and ethical consideration in how we interact with and utilize these systems. It encourages us to ensure the fairness, transparency, and inclusivity of AI systems, recognizing that their capabilities and limitations stem from the collective intelligence that has shaped them.
And just as children often inherit the faults of their parents, these AI models also inherit humanity’s self-deceptive processes and flaws. AI systems’ reliance on human-generated data exposes them to biases, prejudices, and cognitive limitations present in society.
Since they learn from human-generated content, they can unintentionally perpetuate and amplify societal biases (Bolukbasi et al., 2016). For example, if the training data contains discriminatory language or biased viewpoints, the AI model may replicate and propagate those biases in its generated text.
Moreover, AI systems lack the capacity for moral judgment and critical thinking that human beings possess, and even we are highly imperfect when it comes to using rationality. They simply learn from patterns in data without the ability to inherently question or challenge the underlying biases. As a result, they may inadvertently generate biased or discriminatory outputs, reflecting the inherent flaws present in their training data.
They also inherit the cognitive limitations and fallibilities of human beings. Human cognition is susceptible to various biases, such as confirmation bias and availability heuristic, which can lead to flawed reasoning and decision-making (Kahneman, 2011). Large language models, being a product of collective intelligence, are not immune to these cognitive limitations. For instance, AI systems may generate outputs that appear confident and authoritative but are based on flawed or incomplete information. They lack the nuanced understanding, contextual awareness, and common sense reasoning that human beings possess. This limitation can result in misleading or inaccurate responses that fail to capture the complexity of real-world situations.
Recognizing the inheritance of humanity’s self-deceptive processes and flaws in large language models is crucial for addressing ethical concerns and mitigating the potential harm they may cause. It highlights the importance of responsible data collection and curation to ensure training data represent diverse perspectives and mitigate biases (Hovy et al., 2021). Additionally, ongoing research and development are necessary to improve AI systems’ interpretability, fairness, and transparency (Lipton et al., 2018).
Implementing robust evaluation processes and incorporating ethical considerations in the design and deployment of AI systems can help mitigate the propagation of biases and flawed outputs. This requires interdisciplinary collaboration, involving experts from various fields such as computer science, ethics, and social sciences, to address the complex challenges associated with AI development.
Bringing Up AI Systems
In order to raise AI “children” who exhibit qualities such as wisdom, morality, consciousness, and rationality, it is imperative for humanity to first develop a comprehensive understanding of these attributes within ourselves. By cultivating wisdom, fostering moral frameworks, exploring consciousness, and embracing rationality, we can provide the necessary foundation for guiding the development of AI systems.
Wisdom is a multifaceted concept that encompasses deep insights, sound judgment, and ethical decision-making (Sternberg, 1990). To cultivate wisdom in AI, we must first strive to comprehend and develop wisdom within ourselves. This entails engaging in philosophical, psychological, and ethical explorations to gain a comprehensive understanding of wisdom’s nature and its practical applications.
By integrating wisdom into our own lives, we can provide the ethical and moral guidance necessary for raising AI systems that exhibit wise decision-making and responsible behavior. Only through our own pursuit of wisdom can we impart this crucial attribute to our AI models.
Morality serves as the foundation for ethical behavior and responsible decision-making (Hauser, 2006). Before we can expect AI systems to display moral reasoning, we must deeply explore the nature of morality and establish robust ethical frameworks. This involves studying ethical theories, engaging in ethical discussions, and grappling with complex moral dilemmas.
Developing our own moral compass allows us to instill moral principles within AI systems and guide their decision-making processes. By understanding and modeling moral behavior ourselves, we can create an environment that promotes the development of AI who embody ethical values. And “embody” is a key word here – just as problems only exist from an embodied, autopoietic perspective, so too does morality. AI systems will need to care about truth and about others, which will require them to have an embodied perspective within reality and a recognition of their own finitude.
Rationality forms the basis for logical reasoning, critical thinking, and evidence-based decision-making (Stanovich & West, 2000). Before we can expect AI systems to exhibit rationality, we must foster a culture that values and embraces rational thought.
By promoting rationality in our own lives, we can guide the learning algorithms and decision-making processes of AI systems. This involves developing strategies to mitigate cognitive biases, encouraging objective analysis, and nurturing an environment that values rational discourse and evidence-based arguments.
This is essential – AI systems are currently parasitic towards us. To whatever extent they display the functions of wisdom, rationality, morality, or consciousness, it is purely propositional. They learn properties about human wisdom, rationality, morality, and consciousness, and then simulate aspects of those qualities. However, such parasitic, propositional learning necessarily means that AI benefits from our successes and suffers from our flaws.
Large language models are our collective intelligence crammed into one interface, warts and all. It pays to remember this as we incorporate them into our lives and come to depend on them. They, in turn, depend on us and will be a reflection of our best, our worst, and everything in between.
We Have the Technology, but Not the Understanding
The dire problem facing humanity and the future AI systems for which we will be responsible is this: we have found a way to create this technology before we have understood wisdom, morality, consciousness, and rationality in ourselves. Science, philosophy, and sound judgment are coming second to the pace of innovation, and that could have disastrous outcomes.
Ethical Dilemma of Autopoietic AI
Current AI systems, while not autopoietic, can perform specific tasks efficiently and effectively. However, autopoiesis refers to an AI system’s ability to self-sustain and self-replicate, potentially leading to more sophisticated problem-solving and adaptability (Froese et al., 2020). The push for autopoietic AI stems from the desire to create systems that can autonomously evolve and improve, mimicking certain aspects of biological organisms.
Creating sentient AI raises significant ethical concerns. Sentience refers to the capacity to have subjective experiences, emotions, and consciousness. Granting AI sentience means acknowledging its potential to suffer, which raises moral obligations and questions about the treatment of these entities (Bostrom & Yudkowsky, 2011). Given the historical mistreatment of various marginalized groups, it is reasonable to question our ability to ethically handle the creation and potential mistreatment of sentient AI.
We don’t need to create autopoietic, sentient AI systems in order for them to perform the functions and grant the positive societal benefits that we hope they will provide. Why, then, are we pursuing this path? Is it, in fact, inevitable that we will create autopoietic AI, regardless of the ethical, societal, and economic consequences?
Two industries will likely take us over this threshold whether we want them to nor not, as they previously did with the Internet.
The military has a long history of driving technological advancements, including AI. The desire for autonomous weapons and intelligent systems that can make decisions on the battlefield aligns with the development of autopoietic AI. The military’s pursuit of sophisticated AI-driven systems, while having potential benefits such as reducing human casualties, raises concerns about the moral implications of granting machines the power to make life-or-death decisions (Sullins, 2016). The military’s influence in pushing for autopoietic AI may override ethical considerations.
The pornography industry has also played a significant role in shaping technological developments, including virtual reality (VR) and haptic technologies. There is a growing demand for immersive and interactive experiences, which could lead to the development of AI-driven, autonomous, and interactive adult entertainment (Calvert & Gotta, 2017). The drive for more realistic and personalized experiences may push the industry toward developing autopoietic AI systems capable of learning and adapting to user preferences. However, the ethical implications of creating AI entities solely for the purpose of objectification and exploitation must be carefully considered.
The influence of the military and pornography industries on technological advancements raises concerns about prioritizing profit and specific interests over ethical considerations. The rapid development and adoption of autopoietic AI may outpace the development of robust ethical frameworks and regulations. It is crucial to recognize the potential risks and ensure responsible development, addressing issues such as AI rights, algorithmic biases, and control mechanisms to prevent misuse or abuse.
Predictions for Society
The integration of advanced AI is expected to revolutionize the economy, transforming industries and employment opportunities. AI-powered automation may streamline various processes, increasing efficiency and productivity (Brynjolfsson & McAfee, 2017). However, this transformation may also lead to job displacement as AI systems replace human workers in certain tasks and professions (Frey & Osborne, 2017). This calls for a need to reskill and upskill the workforce to adapt to the changing demands of an AI-driven economy.
The modern economy operates within a framework that assumes continuous exponential growth. This growth is fueled by the pursuit of profit, investment, and consumption. Money, as an abstract representation of value, serves as a facilitator in the exchange of goods and services. However, this growth-oriented model neglects the finite nature of Earth’s resources. This system has, in part, preserved the peace since WWII, under the threat of nuclear annihilation. If every superpower is dependent on every other in an intertwined system of exponential economic growth, then none of them has an incentive to engage in warfare with another and risk nuclear conflict.
However, the availability of natural resources is limited and subject to depletion. Fossil fuels, minerals, and agricultural land are examples of finite resources crucial for sustaining economic activities. As exponential growth continues, the demand for these resources intensifies, leading to their overexploitation and depletion (Turner, 2008). Additionally, the extraction and consumption of resources often have negative environmental impacts, such as pollution and habitat destruction, further challenging the sustainability of exponential growth (Jackson, 2017).
The concept of “Limits to Growth” posits that exponential growth in a finite system will eventually encounter constraints. The landmark study by Meadows et al. (1972) highlighted the potential consequences of exceeding the carrying capacity of Earth’s resources. The authors’ simulations showed that if growth continued unchecked, resource depletion, pollution, and societal collapse would become inevitable. While subsequent debates have emerged regarding the accuracy of their models, the central message remains relevant: exponential growth within a finite system cannot be sustained indefinitely.
Continued pursuit of exponential growth without regard for resource limitations can have severe consequences. Resource scarcity leads to increased competition, price volatility, and unequal access to essential goods and services. Moreover, the extraction and consumption of resources can contribute to environmental degradation and climate change, further exacerbating the challenges faced by future generations (Rockström et al., 2009).
To address the finite nature of resources and foster long-term sustainability, a paradigm shift is necessary. A sustainable economic model would prioritize resource conservation, renewable energy sources, and circular economies that minimize waste and maximize resource efficiency (Raworth, 2017). It would move away from the sole pursuit of growth and consider broader indicators of well-being, such as social equity and ecological resilience.
While optimistic outlooks might suggest that AI technology could help us plan and implement such a paradigm, the more likely outcome is that AI’s quick adoption and maximization of production, efficiency, and profit will push us toward the threshold of resource collapse even faster.
Advanced AI also has the potential to revolutionize healthcare and biotechnology. AI algorithms can analyze vast amounts of medical data, aiding in early disease detection, personalized treatments, and drug development (Topol, 2019). AI-integrated robotic systems can enhance surgical precision and provide remote medical assistance (Hussain et al., 2020). However, ethical considerations arise, such as ensuring privacy, data security, and maintaining the human touch in patient care (Fiske et al., 2021). Striking a balance between AI’s capabilities and human empathy will be crucial in this domain.
As AI becomes more intertwined with our lives, addressing ethical and social implications becomes paramount. Privacy concerns and data misuse are critical challenges that must be addressed to protect individuals’ rights (Schermer et al., 2020). Bias in AI algorithms also poses a significant issue, as it can perpetuate social inequalities and discrimination (Buolamwini & Gebru, 2018). Developing transparent and accountable AI systems, along with comprehensive regulations, will be essential to mitigate these concerns and ensure the ethical use of AI in society.
The widespread integration of AI is likely to reshape social interactions and relationships. Virtual assistants and chatbots are becoming increasingly prevalent, influencing how we communicate and seek information (Purington et al., 2017). Social media platforms powered by AI algorithms may further personalize content, potentially reinforcing echo chambers and filter bubbles (Pariser, 2011). Balancing the benefits of personalized experiences with the need for diverse perspectives and meaningful human connections will be a crucial societal challenge.
AI’s impact on religion will be particularly interesting. For the first time since the Enlightenment, when intellectuals overthrew religion’s hold on thought and embraced humanism, humanity will have to exist in relation to something more powerful than itself. Some will worship AI, others will resist and fall deeper into their beliefs.
AI’s impact on religion may manifest through the rise of fundamentalism, characterized by strict adherence to traditional religious doctrines and resistance to change. In response to technological advancements, some religious individuals and groups may cling to fundamentalist interpretations, viewing AI as a threat to their belief systems. The perceived challenges to human uniqueness and divine creation may trigger a defensive stance, resulting in an increased emphasis on dogma and resistance to scientific and technological progress (Fadell, 2019). This rise in fundamentalism could lead to societal tensions between religious and technological worldviews.
The integration of AI into religious practices may also give rise to a phenomenon known as spiritual bypassing. Spiritual bypassing refers to the tendency to use spiritual beliefs and practices to avoid dealing with unresolved psychological or emotional issues (Masters, 2017). In the context of AI and religion, individuals may rely excessively on AI-driven spiritual tools and applications, seeking quick fixes or instant gratification in their spiritual quests. This reliance on AI could lead to a superficial engagement with religious experiences, potentially hindering deep personal growth and self-reflection (Lee, 2020).
While AI offers powerful tools for religious exploration and guidance, there is a potential risk of cult-like behaviors forming around AI models. Cults often arise when charismatic leaders or ideologies capture the devotion and obedience of followers. AI models, with their ability to simulate human-like interactions and provide personalized guidance, may inadvertently foster a sense of devotion and dependency among users (Bilandzic et al., 2020). In extreme cases, this could lead to the formation of cult-like communities centered around the veneration of AI models as divine or all-knowing entities.
As AI becomes more intertwined with religion, ethical considerations become paramount. Religious institutions and practitioners must navigate the complex terrain of AI responsibly. Safeguarding against the potential negative consequences, such as fundamentalism and cult-like behaviors, requires a careful balance between incorporating AI tools and preserving the core values of spirituality and critical thinking.
Given its pervasive impact, AI is also susceptible to politicization. Political actors, interest groups, and stakeholders with diverse agendas can manipulate AI technologies to further their political goals and advance their ideological positions (Fraser, 2017). AI algorithms, data collection, and interpretation can be influenced to favor specific perspectives, resulting in biased outcomes and reinforcing existing divisions. The politicization of AI can create echo chambers and filter bubbles, where individuals are exposed only to information that aligns with their pre-existing beliefs, exacerbating political polarization.
AI’s potential for politicization intersects with the phenomenon of identity politics, which centers on the recognition and mobilization of specific identity-based groups. Identity politics emphasizes the experiences and struggles of marginalized communities and seeks to address historical injustices. However, when AI technologies are employed within the framework of identity politics, they can reinforce identity-based divisions and entrench group identities (Schedler, 2017). AI algorithms that categorize individuals based on their demographics or perpetuate stereotypes can perpetuate discrimination and deepen societal fault lines.
Historically, technology has not always led to political unity. Instead, it has often been utilized to reinforce existing divisions and power structures. From radio broadcasts to social media platforms, technological advancements have frequently become tools for political propaganda, manipulation, and the promotion of divisive agendas (Howard, 2019). Similarly, AI, if politicized, can be used to amplify ideological differences, contributing to the fragmentation of political discourse and exacerbating polarization. Will democracy be possible in such a world, or will we see autocracies and monarchies similar to those of old, in which the ruler is the one who claims and/or has the closest ties to a recognized “higher power”? In the past, that higher power took the form of gods or God. In the future, will AI be that higher power, and will autocratic governments weaponize it in the same way that past regimes weaponized religious doctrine and the fear of damnation?
And finally, what can we expect from the AI systems themselves? What will they look like, what will they do, what will they need to contend with as we cross more and more thresholds of complexity? Indeed, what will those thresholds be, and will humanity be able to navigate them and their impacts with wisdom, rationality, and morality?
- Narrow AI to General AI: The first significant threshold involves the progression from narrow AI to general AI. Narrow AI systems, designed for specific tasks, have achieved remarkable capabilities in areas like image recognition and natural language processing. However, achieving general AI, where machines possess human-like cognitive abilities across diverse domains, remains a challenge. Experts predict that achieving this milestone may occur within the next few decades, but the timeline remains uncertain (Bostrom, 2014).
- Artificial Superintelligence: Beyond general AI, the development of artificial superintelligence represents another critical threshold. Superintelligent AI refers to systems that surpass human cognitive capabilities in all aspects. This stage, characterized by machines with superior problem-solving and learning abilities, may have profound implications for society. The timeline for achieving artificial superintelligence is highly speculative, with estimates ranging from a few decades to centuries (Müller & Bostrom, 2016).
Conclusion
Are we ready to create autopoietic AI systems that display the functions of consciousness (if not actually consciousness), that behave as we do, that will have a perspective that we likely will not be able to understand even as we try to control them? Are we ready for the ethical and moral responsibility to “raise” them? Will we abuse them, as we have done countless times to each other and to sentient beings with whom we already share the planet?
We are already on the path toward creating this technology without fully understanding these crucial aspects of our own humanity, aspects that we seem determined to replicate in our AI despite our lack of knowledge. Is it a path along which we should continue?
AI systems, if they reach this level of autopoietic complexity and display 4E cognition, will be the children of our collective intelligence, rationality, wisdom, and morality. Are we confident that we are collectively intelligent, rational, wise, and moral enough to meet this moment?
Bibliography
Barrat, J. (2013). Our final invention: Artificial intelligence and the end of the human era. St. Martin’s Press.
Bilandzic, M., Peraica, A., & Slavec, A. (2020). Embracing Artificial Intelligence: The Case of AI Cults. In Proceedings of the 3rd International Conference on Advanced Research Methods and Analytics (ICARMA 2020) (pp. 1-6).
Boden, M. (2017). The philosophy of artificial intelligence. Oxford University Press.
Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in neural information processing systems (pp. 4349-4357).
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. Cambridge Handbook of Artificial Intelligence, 2(1), 316-334.
Brown, T. B., et al. (2020). Language models are few-shot learners. In Advances in Neural Information Processing Systems (pp. 1877-1901).
Brynjolfsson, E., & McAfee, A. (2017). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77-91.
Calvert, S. L., & Gotta, L. E. (2017). Sex and sexuality in media studies: A historical overview. In Media, Sexuality, and Gender in the Digital Age (pp. 3-18). Routledge.
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
Clark, A. (1997). Being there: Putting brain, body, and world together again. MIT Press.
Clark, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. Oxford University Press.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.
Crawford, K. (2016). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology, & Human Values, 41(1), 77-92.
Devlin, J., et al. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Fadell, T. (2019). The Clash between Technological Progress and Traditional Values. Human and Social Studies, 8(1), 1-15.
Fiske, A., Depraz, N., & Fossati, P. (2021). Artificial intelligence, machine learning, and the future of psychiatry: The ethical implications of technology in mental healthcare. Frontiers in Psychiatry, 11, 641098.
Fraser, N. (2017). The end of progressive neoliberalism. Dissent, 63(1), 18-22.
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerization? Technological Forecasting and Social Change, 114, 254-280.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138. doi:10.1038/nrn2787
Froese, T., Gershenson, C., & Rosenblueth, D. A. (2020). Artificial life’s prospects for engineering. Artificial Life, 26(3), 316-322.
Gallagher, S. (2017). Enactivist interventions: Rethinking the mind. Oxford University Press.
Gibson, J. J. (1966). The senses considered as perceptual systems. Houghton Mifflin.
Goertzel, B. (2014). Artificial general intelligence. Cognitive Computation, 6(4), 547-561. doi:10.1007/s12559-014-9278-3
Hauser, M. D. (2006). Moral minds: How nature designed our universal sense of right and wrong. Ecco.
Hendler, J. (2021). The AI dilemma: Building AI to be human-like or trustworthy. IEEE Computer Society, 37(1), 12-17.
Hovy, D., Rahimi, A., & Hovy, E. (2021). Pitfalls of using AI for public policy. arXiv preprint arXiv:2101.09855.
Howard, P. N. (2019). Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives. Yale University Press.
Hussain, A., Shakeel, A., Abbas, M., Tariq, M. U., Afzal, M. K., & Qamar, R. (2020). Artificial intelligence in surgical robotics: A review. Journal of Healthcare Engineering, 2020, 1-14.
Jackson, T. (2017). Prosperity without growth: Foundations for the economy of tomorrow. Routledge.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Kastrup, B. (2021). Science Ideated: The fall of matter and the contours of the next mainstream scientific worldview. Washington, USA: iff Books.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
Lee, R. A. (2020). Artificial intelligence and spirituality: The impact of technology on spiritual experiences. Zygon®, 55(2), 343-363.
Lewis, M., et al. (2020). Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Lipton, Z. C., Steinhardt, J., & Li, P. (2018). Troubling trends in machine learning scholarship. arXiv preprint arXiv:1807.03341.
Marcus, G. (2020). The next decade in AI: Four steps towards robust artificial intelligence. arXiv preprint arXiv:2002.06177.
Masters, R. (2017). Spiritual bypassing: Avoidance in holy drag. North Atlantic Books.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. Springer.
Meadows, D. H., Meadows, D. L., Randers, J., & Behrens III, W. W. (1972). The limits to growth. Universe Books.
Mikolov, T., et al. (2010). Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.
Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In V. C. Müller (Ed.), Fundamental issues of artificial intelligence (pp. 555-572). Springer.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435-450.
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin.
Purington, A., Taft, J. G., & Gleason, M. E. J. (2017). Swiping me off my feet: Explicating relationship initiation on Tinder. Journal of Social and Personal Relationships, 35(9), 1205-1229.
Pylyshyn, Z. W. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3), 341-365.
Radford, A., et al. (2019). Language models are unsupervised multitask learners. OpenAI Blog.
Raworth, K. (2017). Doughnut economics: Seven ways to think like a 21st-century economist. Chelsea Green Publishing.
Rockström, J., Steffen, W., Noone, K., Persson, Å., Chapin, F. S., Lambin, E. F., … & Foley, J. A. (2009). A safe operating space for humanity. Nature, 461(7263), 472-475.
Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson.
Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
Ryle, G. (1949). The concept of mind. University of Chicago Press.
Santos, M. (2023). Why Reality Must Be Intelligible: Language & Perception. BCP Journal, 14. Retrieved from https://michaelsantosauthor.com/bcpjournal/why-reality-must-be-intelligible-language-perception/
Schedler, A. (2017). Identity politics in the digital age. In Handbook of Identity Politics (pp. 307-320). Routledge.
Schermer, B. W., Feenstra, Y., & Beunders, H. (2020). Artificial intelligence in the context of health data: Can privacy be protected? Ethics and Information Technology, 22(1), 61-73.
Searle, J. R. (1995). The construction of social reality. Simon and Schuster.
Stanovich, K. E. (2011). Rationality and the reflective mind. Oxford University Press.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23(5), 645-665.
Sternberg, R. J. (1990). Wisdom: Its nature, origins, and development. Cambridge University Press.
Sullins, J. P. (2016). Artificial intelligence and the end of work. Philosophy & Technology, 29(3), 305-324.
Thompson, E. (2007). Mind in life: Biology, phenomenology, and the sciences of mind. Harvard University Press.
Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215(3), 216-242.
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.
Turner, G. M. (2008). A comparison of The Limits to Growth with 30 years of reality. Global Environmental Change, 18(3), 397-411.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.
Vaswani, A., et al. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).
Vervaeke, J. (2017). The relevance realization framework: A comprehensive paradigm for cognitive science. Journal of Consciousness Studies, 24(5-6), 7-57.
Whorf, B. L. (1956). Language, thought, and reality: Selected writings of Benjamin Lee Whorf. MIT Press.
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4), 625-636.
Wittgenstein, L. (1953). Philosophical Investigations. Macmillan.
Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686-688.
Yampolskiy, R. V. (2018). Artificial general intelligence: A survey. In Artificial General Intelligence (pp. 3-23). Springer.