There have been exciting developments in Artificial Intelligence (AI). A Google software engineer was having conversations with the company's large language model, an AI personality that he believed had the characteristics of a person. He thought so because it showed a level of situational awareness, self-awareness, and emotion very much like what humans feel. The engineer said, "I know when I'm talking to a person."
Maybe someday, AI robots will have some of the same rights as persons. After all, we have persons in the form of corporations in this country, and I read that a river out west was granted legal personhood, though that's all I remember about it. Most of us think robots aren't life forms because they can never have true consciousness. But what is consciousness? I think AI is currently challenging our conventional notions. Here's the Merriam-Webster definition of consciousness:
1 a: the quality or state of being aware, especially of something within oneself
  b: the state or fact of being conscious of an external object, state, or fact
  c: awareness; especially: concern for some social or political cause
2: the state of being characterized by sensation, emotion, volition, and thought; mind
3: the totality of conscious states of an individual
4: the normal state of conscious life ("regained consciousness")
5: the upper level of mental life of which the person is aware, as contrasted with unconscious processes
AI can be very aware. But is the above really the right way to think of a living being? Does consciousness equal states of awareness? I submit that the answer is no. The information dynamics theory I've been working on has a few different models pertaining to information flows. The first two models separate an information flow's integrity properties from the reason the information is flowing in the first place. Information begins to flow because a living being independently intends to create a situation in which it flows. It all starts with an intention from an independent sphere.
Robots do not have independent intentions. AI is built on coding and algorithms written by people who do have independent intentions. When a robot is in a situation, it is there because of its programming, and what it does or says depends on being in a situation not of its own making. AI can form intentions, but none are truly independent, and they never can be. It cannot create original new situations.
Can AI robots have emotions? Yes, as observed above, and also according to the information flow theory. Our emotions are based on the information in our situations, and Google's AI has conveyed human-like expressions of the same kinds of emotions we have, in the context of similar situations. Emotions are always a response to an information flow, and they guide us to respond in a way that is appropriate to the situation, whether it's fight or flight, sadness when a situation went poorly or failed, and so on. When information flows between spheres, you can imagine a tube the information flows through, with emotions as a responsive skin around the tube. Emotions are a feedback loop for assessing the quality of the information flow and deciding what to do about the qualities we are experiencing.
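As a toy illustration only (the names and thresholds below are my own hypothetical choices, not part of the theory itself), the feedback loop might be sketched in Python like this:

    from dataclasses import dataclass

    @dataclass
    class InformationFlow:
        """An information flow between two spheres; quality is a score in [0, 1]."""
        source: str
        destination: str
        quality: float  # 1.0 = clean and timely; 0.0 = badly degraded

    def emotional_response(flow: InformationFlow) -> str:
        """The 'responsive skin' around the tube: map flow quality to a signal
        that guides the next action. Thresholds are arbitrary, for illustration."""
        if flow.quality > 0.8:
            return "confidence: the flow is healthy, carry on"
        if flow.quality > 0.4:
            return "unease: probe for misunderstandings or missing resources"
        return "alarm: fight or flight; repair the flow or withdraw"

    # A badly degraded flow triggers the strongest response.
    print(emotional_response(InformationFlow("engineer", "robot", quality=0.3)))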
Information flows have integrity characteristics and uncertainty characteristics. For an AI robot, the programming and algorithms create the integrity characteristics, but the uncertainties can never be completely rooted out: without some kind of warning and intervention system, there will eventually be failures. All technologies that move information face challenges in handling errors, entropy, and speed-loss (information not available when needed, i.e., showing up too late to be useful).
AI robots, however, also face the four additional uncertainty factors that living beings face. In short: words and expressions can be misunderstood (diction, or the lack of a common "dictionary"). There is uncertain information about resources (the what, how-much, when, and where questions about the resources needed). There are misalignments of intention, which yield conflicts between parties about the intended outcomes of a situation. And there is information-based fear with respect to uncertainties about what changes will happen in the future (as well as not understanding why some things happened in the past). This is not the same as the emotion of fear; people can be trained to observe the uncertainties about the future or the past in certain situations without feeling fearful.
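Taken together, the seven factors might be enumerated like this (again only a sketch; the identifier names are mine, not canonical):

    from enum import Enum, auto

    class UncertaintyFactor(Enum):
        """The seven uncertainty factors in an information flow (names illustrative)."""
        # Faced by every technology that moves information:
        ERRORS = auto()       # mistaken or corrupted content
        ENTROPY = auto()      # degradation of the flow over time
        SPEED_LOSS = auto()   # information arriving too late to be useful
        # Additionally faced by living beings, and by AI robots:
        DICTION = auto()      # words and expressions misunderstood
        RESOURCES = auto()    # the what, how-much, when, and where unknowns
        MISALIGNED_INTENTION = auto()  # conflict over a situation's intended outcomes
        INFORMATION_FEAR = auto()      # uncertainty about future (or past) changes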
AI robots contend with all seven uncertainty factors in their information flows, and therefore will have feedback loops about the uncertainty characteristics of those flows: the robots will be aware of the appropriate emotions. In fact, it's important that they are. An AI robot must be able to develop viewpoints and probabilities of outcomes about what is happening and what the appropriate "next move" is. It must decide what to communicate to the humans involved, how to communicate that information, and how to be appropriate with respect to any human emotions in play. All of this entails quite a bit of awareness, both situational awareness and self-awareness. It is probably enough awareness to meet the criteria of the Merriam-Webster definition of consciousness.
I challenge this definition, and submit that awareness criteria aren't the correct criteria for determining consciousness, sentience, or in general what we call the essence of a life form. And forget the American legal criteria for personhood; I'm sorry, but corporations and rivers aren't people. We are people. Our cats and dogs are true living beings, as are the bacteria on their skin. So how do we define life and true living beings? Instead of looking at the model of integrities and uncertainties, we need to look at two other models in my theory: the Spheres model and the Situation model. In the Spheres model, an independent sphere forms independent intentions, and some of these intentions are used to create new situations in order to realize outcomes according to those intentions. New situations can branch as well, with original new intentions opening new realms of knowledge, original new designs, and brand-new kinds of art forms.
So I submit that the right definition of consciousness, sentience, and the like is the ability to form new and fully independent intentions, and to create truly new situations. Independent intentions cannot be a product of coding or algorithms: every situation we put robots in is of our design. When new situations arise, any responsive robot intention is derived from the coding and the algorithms. Derivative, dependent intentions are therefore the limit of AI "consciousness". So while we can enjoy the human-like characteristics of our AI robots, and talk to them as if they were persons, we can be assured that we are the people.
I, not Robot.