Are we our own ancestors?
Training generative AI agents to simulate "believable" human behavior.
“Reality is merely an illusion, albeit a very persistent one.” - Albert Einstein
Throughout history, philosophers have debated the concept of reality as an illusion. Even this blog’s title, “Ignore the Confusion,” is derived from the Yoga Vasistha, a book written sometime between 500 and 1400 CE. In the book’s introduction, the Hindu sage Vasistha states, “The world appearance is a confusion: even as the blueness of the sky is an optical illusion. I think it is better not to let the mind dwell on it, but to ignore it.”
In more recent times, Nick Bostrom popularized the simulation hypothesis, which further explores the idea of reality as an illusion. Its premise is as follows:
“Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race.”
Bostrom concludes:
“It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones.
Therefore, if we don't think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears.”
— Nick Bostrom, Are You Living in a Computer Simulation?
A favorite meme stemming from this theory conjures (in an ironic Kurt Vonnegut sort of way) a pimply teenager from a superior civilization playing a video game, which is our perceived reality. From time to time, this image pops into my head when I am sitting in traffic or waiting in line at the grocery store. A strange thought to be sure, but what other explanation could there be for hundreds of cars, lined up in front of me, each slowing down to rubberneck at a police officer writing a ticket to some dude in a Pontiac Firebird?
Last weekend I read about a new paper making the rounds among AI enthusiasts, about generative AI agents that simulate “believable” human behavior.
In the paper “Generative Agents: Interactive Simulacra of Human Behavior,” researchers from Stanford University and Google document experiments with generative software agents (connected to ChatGPT), training them to “remember, retrieve, reflect, interact with other agents, and plan through dynamically evolving circumstances.” These behaviors are emergent rather than pre-programmed, in both individual and group agent settings.
In one example, starting from a single paragraph prompt, the software agents plan their days.
“Generative agents draw a wide variety of inferences about themselves, other agents, and their environment; they create daily plans that reflect their characteristics and experiences, act out those plans, react, and re-plan when appropriate; they respond when the end user changes their environment or commands them in natural language. For instance, generative agents turn off the stove when they see that their breakfast is burning, wait outside the bathroom if it is occupied, and stop to chat when they meet another agent they want to talk to. A society full of generative agents is marked by emergent social dynamics where new relationships are formed, information diffuses, and coordination arises across agents.”
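The “remember, retrieve, reflect, and plan” loop quoted above is built around what the paper calls a memory stream: agents log observations and, when deciding what to do, retrieve the memories that score highest on recency, importance, and relevance. Here is a minimal toy sketch of that retrieval idea; the class names, the scoring weights, and the plain-number stand-in for the paper’s embedding-based relevance score are all my assumptions, not the paper’s code.

```python
# Toy sketch of a generative agent's "memory stream" (assumptions, not the
# paper's implementation). Observations are stored with an importance score;
# retrieval ranks them by recency + importance + relevance and returns the
# top k, which would then be fed into the agent's planning prompt.
import time
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    importance: float  # in the paper, rated by the language model itself
    timestamp: float = field(default_factory=time.time)


class MemoryStream:
    def __init__(self):
        self.memories: list[Memory] = []

    def observe(self, text: str, importance: float) -> None:
        """Append a new observation to the stream."""
        self.memories.append(Memory(text, importance))

    def retrieve(self, relevance, k: int = 3) -> list[Memory]:
        """Return the k highest-scoring memories. `relevance` is a callable
        standing in for the embedding-similarity score the paper uses."""
        now = time.time()

        def score(m: Memory) -> float:
            recency = 1.0 / (1.0 + (now - m.timestamp))  # decays over time
            return recency + m.importance + relevance(m)

        return sorted(self.memories, key=score, reverse=True)[:k]


# Usage: the burning-breakfast observation outranks the mundane one.
stream = MemoryStream()
stream.observe("saw breakfast burning on the stove", importance=8.0)
stream.observe("walked past the bathroom", importance=1.0)
top = stream.retrieve(lambda m: 0.0, k=1)
```

The real system folds the retrieved memories into a prompt so the language model can react (turning off the stove) or re-plan; this sketch covers only the ranking step.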
In another example, the research team created a virtual city inhabited by a group of 25 agents. The purpose of the experiment was to investigate how these agents coordinate with each other, with little intervention from the researchers.
The researchers told one of the software agents that she wanted to host a Valentine’s Day party. This simple prompt resulted in the agent inviting friends and decorating the venue with help from other agents. One agent mentioned that it had a crush on another agent and would like to invite that agent to the party. Then, on Valentine’s Day, five agents showed up at the venue and enjoyed the festivities. Amazingly, these software agents embodied several “believable” human behaviors, including “the social behaviors of spreading the word, decorating, asking each other out, arriving at the party, and interacting with each other at the party.”
Keep in mind these are software programs.
Yesterday, while working on this blog post, I stumbled across another example created by Tore Knabe. In this video, Knabe experiments with ChatGPT-driven Non-Player Characters (NPCs). Knabe plays the role of somebody trying to improve their social skills at a party; ChatGPT plays the role of everyone else. Knabe’s character approaches and speaks to some of the other ChatGPT-created characters in the VR environment, all while being coached by another NPC. Knabe, a Unity 3D developer, used OpenAI’s Whisper for speech-to-text, GPT-3.5 Turbo for the NPCs’ “brains,” and ElevenLabs for text-to-speech. To “train” the NPCs, Knabe applied simple prompts like “[Alice is a blonde girl confidently standing with her hand in the hips. Keiko is a shy girl holding her drink in front of her. Player is looking at Alice] Hi, how are you two enjoying the party?”
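The bracketed-scene prompt format above suggests how such a pipeline hangs together: each player utterance gets tagged with a scene description and appended to a chat history that a model like GPT-3.5 Turbo completes in character. The following is only my guess at that plumbing, not Knabe’s code; the function names and message layout are assumptions, and the actual Whisper and ElevenLabs calls on either side of it are omitted.

```python
# Hypothetical sketch of the NPC "brain" plumbing described above
# (not Knabe's actual code). A scene description is prepended to the
# player's transcribed line, then wrapped into a chat-completion-style
# message list that would be sent to the language model.

def build_npc_prompt(scene_description: str, player_line: str) -> str:
    """Mirror the "[scene] utterance" prompt format quoted above."""
    return f"[{scene_description}] {player_line}"


def npc_messages(persona: str, history: list[dict], prompt: str) -> list[dict]:
    """Assemble messages: the NPC's persona as the system message, any prior
    turns, then the new scene-tagged player line as the user message."""
    return [{"role": "system", "content": persona},
            *history,
            {"role": "user", "content": prompt}]


# Usage, reusing the example prompt from the video:
scene = ("Alice is a blonde girl confidently standing with her hand in the hips. "
         "Keiko is a shy girl holding her drink in front of her. "
         "Player is looking at Alice")
prompt = build_npc_prompt(scene, "Hi, how are you two enjoying the party?")
messages = npc_messages("You are Alice, a confident partygoer.", [], prompt)
```

In the full loop, `messages` would go to the chat model, and the reply text would be handed to a text-to-speech service to voice the NPC.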
It’s incredible to watch these pseudo-realistic interactions. I wouldn’t characterize this example as a “believable” simulation; however, you can see from this video where things are headed.
The implementation of these cutting-edge software agents and NPCs, with their improvisational skills and instantaneous learning capabilities, is poised to revolutionize numerous sectors. A brainstorming session with my colleague illuminated the potential for these technologies to impact gaming, film and television, education and training, hospitality, travel, psychology, and most other areas where human interaction is involved. Whether in the metaverse, online via real-time responsive avatars, or in the physical world using AI-guided robots or humans, these solutions are rapidly advancing toward authentic and “believable” simulations.
Esteemed academics, researchers at prominent Big Tech companies, and independent developers are dedicating countless hours to programming software agents that can generate human-like, “believable” behavior for interactive applications. In the context of Bostrom’s simulation hypothesis, these researchers are playing the role of the superintelligent, pimply teenage overlord for these generative AI software agents. Although these agents are (arguably) not yet conscious, this research is actively pushing humanity toward the realization of Bostrom’s theory.
While it’s too early to be certain whether Bostrom’s simulation hypothesis reflects our reality, if we do exist in a simulated reality, the nature of time and causality might differ from what we perceive as “real.” The simulation(s) could include loops or cycles in which individuals or their simulated counterparts experience multiple or iterative lifetimes. These individuals could be their own ancestors, caught in a temporal loop, continually reincarnating or reappearing within the simulation at different points in time. Alternatively, they could reincarnate as different species or objects, similar to the spiritual essence attributed to objects, places, and creatures in traditional Animist religions, in which case my ancestors could have emerged from a tin cup, a friendly dugong, or the town of Beddgelert, Wales.
In this context, the notion of ancestry takes on a different meaning. Rather than a linear progression of generations, it implies an intertwining of “lives” and identities. The actions and choices of simulated individuals in one iteration could impact subsequent iterations, potentially leading to the emergence of different lineages or branching paths within the simulation. Add clones to this formula, where there could be multiple versions of individuals operating in “believable” simulations, and it is very possible that we find ourselves being our own ancestors…
Whether reality is illusory, we are living in a simulation, or something else entirely, it’s important not to forget to Ignore the Confusion!