Evolving Together: Human-AI Coevolution Is Said To Be Coming Whether Humanity Likes It Or Not

A trending claim is that human-AI coevolution is underway and will expand. AI shapes humans, humans shape AI. Good or bad? Here's the inside scoop.


Human-AI coevolution entails AI shaping humans, and humans shaping AI, but is that a good direction or problematic?

In today's column, I explore the emerging belief that humans and AI will evolve together, referred to as human-AI coevolution. The controversial topic is being heatedly debated. Some proclaim that the weighty matter is obvious and entirely irrefutable.



Others politely suggest that the contention is pure hogwash. Let's talk about it.

This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Evolution And What Is Meant By Evolving

First, I'm sure we would all abundantly agree that humans evolve. That's pretty much a solid assertion, and we can probably thank Darwin especially for his insights on evolutionary fundamentals.

The topic of evolution is taught to children in school. Our daily lives include news stories that in one fashion or another bring up the evolving nature of humans, plants, animals, and our world at large. Evolution and the act of evolving are constantly on our minds.

Would you be willing to say that AI evolves? Whoa, perhaps that's a bridge too far. AI is not alive. AI is not sentient.

Living organisms evolve. Contemporary AI is not a living creature. If you are going to claim that AI is evolving, you might as well say the same about toasters or refrigerators.

You see, the modern-day toaster has a lot more features than its predecessors. Thus, you might declare that this is a form of evolution. A counterargument is that since humans design and build toasters, the toaster itself had no part in instigating that evolution and should not count as evolving per se.

Meanwhile, AI is said by some to be quite a different artifact from a toaster or a refrigerator, and ergo belongs in a separate category. Generative AI and large language models (LLMs) appear to interact with us. You can readily carry on fluent conversations with AI.

A traditional toaster cannot converse with you (unless, of course, it has AI added to it). Plus, generative AI is rapidly improving; see my discussion at the link here. Aha, the refrain goes, AI is indeed evolving.

Much of this presumed evolution is at the hands of AI researchers and AI makers. A slight twist is that AI can, to some degree, improve upon itself, though please do keep in mind this is an algorithmic process of refinement and not due to AI sentience (for more about how AI is improving AI, see my coverage at the link here). Bottom line: there is a lot of room for argumentation about whether AI is evolving.

One viewpoint is that you are inadvertently and incorrectly anthropomorphizing AI by anointing it with evolutionary properties (see my analysis of legal personhood for AI at the link here). It is fine to say that AI is being modified and enhanced, but labeling this as a semblance of AI evolving just seems like an eyebrow-raising leap. Others exhort that AI is evolving and anyone denying that fact has their head in the sand.

I won't settle the argument here and just wanted you to be aware of the prolonged discourse taking place.

Human-AI Coevolution Prognosis

For the sake of engaging in a wholehearted discussion, assume that AI is evolving. I realize that those who fervently oppose that assumption will be sick to their stomachs about allowing that said-to-be crazy notion to fly, but, hey, go with the flow just for funsies.

We then have these two ideas taking place at the same time: humans are evolving, and AI is evolving. The big reveal is this. Maybe both are evolving together, namely that we are experiencing human-AI coevolution. Allow a moment of explanation.

The concept is that humans are making use of AI and are therefore likely impacted by what AI says to them. Right now, the popular OpenAI ChatGPT app garners around 250 million weekly active users, and if you count the users of the other major generative AI apps such as Anthropic Claude, Google Gemini, and Meta Llama, the volume of weekly users is probably in the many hundreds of millions, or even billions. Daily, people ask generative AI questions.

The AI responds. People presumably learn new things, gain new knowledge, expand their understanding, and otherwise are intellectually impacted by AI. Whether this is good or bad, well, that's a different consideration, but there is an impact taking place regardless of either side of that coin.

Okay, based on that groundwork, we’ll say that AI is helping humans to evolve. AI is also evolving because of those human interactions. It goes like this.

You ask AI a question and the answer is, let's say, incomplete. You provide additional details to let the AI know what the full answer is. If the AI is set up to "learn" or allow for additional data training, the feedback provided by the person causes the AI to be updated.
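
To make that mechanism a bit more concrete, here is a minimal, hypothetical Python sketch of the feedback-capture step, assuming an AI app that simply logs a user's correction into a dataset that some later fine-tuning or retrieval step could draw upon; the function name, file name, and sample exchange are illustrative, not any particular vendor's API.

```python
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "feedback_log.jsonl"  # hypothetical file of collected corrections

def record_user_correction(question: str, ai_answer: str, user_correction: str) -> dict:
    """Store a user's correction alongside the original exchange.

    Nothing "learns" here on its own; a later fine-tuning or retrieval
    step would have to consume these records for the model's behavior
    to actually change.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "ai_answer": ai_answer,
        "user_correction": user_correction,
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: the person supplies the missing detail, and that feedback is captured.
record_user_correction(
    question="When was the first transatlantic telegraph cable completed?",
    ai_answer="Sometime in the 1850s.",
    user_correction="More precisely, the first cable was completed in 1858.",
)
```

The key point of the sketch is that the AI gets updated only because a human bothered to push back on an incomplete answer.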

Voila, people have impacted AI, and we'll say that humans are helping AI to evolve. A reasoned conclusion is this: AI is shaping humans, humans are shaping AI, and therefore humans and AI are coevolving. That's quite a lot to take in. Are you of the mind that this represents clear-cut logic and makes great sense, or does the whole kit and caboodle seem like a house of cards based on flimsy and faulty assumptions? Please take a reflective moment to mull this over, thanks.

Examples Of Human-AI Coevolution

I can give you two examples of what this alleged human-AI coevolution looks like. They are quick examples and just intended to highlight the conceptual precepts involved. Do not judge entirely based on the examples shown.

Use them simply as a reference point. Suppose a college student opted to log into a generative AI app and engaged in a dialogue about preparing for their final exams. Here we go.

Student's entered prompt: "My final exams are coming up soon. How should I prepare?"

Generative AI response: "Based on what you've shared with me in our prior chats, statistics is the subject where you've been underperforming, so devote extra study time there. Your descriptions of your study habits also suggest you retain material best when you study in dedicated blocks of time rather than cramming."

Take a close look at the dialogue. Did the college student seem to learn anything from the interaction with the AI? Yes, it appears that they didn’t realize statistics was an area of underperformance, plus they weren’t aware that studying in blocks of time works best for them. Those indications by AI are likely to last in the mind of the student.

Depending upon your belief, it could be said that the mind of the student has “evolved” because of the AI interaction. As an aside, we could readily quibble about whether the AI was giving sound advice. Maybe the AI is wrong about the best way for the student to study.

Perhaps, too, the claim about the student's weakness in statistics is overblown. Regardless of whether the AI was right or wrong, the emphasis in this particular example is that AI advice was given to the person, and we would reasonably anticipate that the advice has impacted or will impact them.

Next Example Of Human-AI Coevolution

Now that you've warmed up to what a human-AI coevolution example looks like, we can try seeing the angle from the AI-evolving perspective.

Suppose we have a mental health therapist who is trying to train generative AI to provide mental health guidance. Society is gradually turning to AI for mental health advisement. At a population-level scale, the predicted impact is enormous.

This is a booming trend and, if the topic interests you, see my comprehensive overview at the link here. Here we go with the therapist using generative AI.

Therapist's entered prompt: "Your replies to people describing their anxiety come across as clinical and detached. Please start by acknowledging the person's feelings and use warmer, more empathetic wording before offering suggestions."

Generative AI response: "Understood. Going forward, I will acknowledge the person's feelings first and phrase my guidance in a warmer, more empathetic manner."

The therapist has provided suggestions to generative AI about how to best respond to people.

In this case, the AI was not coming across as empathetic. As a result of the therapist's comments, the AI will simulate a semblance of empathy when composing responses (for how AI does this empathetic messaging, see my explanation at the link here). This example showcases that humans can impact AI.
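
To give a rough sense of the mechanics, here is a hedged Python sketch, assuming the simplest possible setup in which the therapist's guidance is folded into the standing system instructions sent with every request; build_system_prompt and call_model are hypothetical stand-ins, not any specific product's API.

```python
# Hypothetical sketch: human guidance accumulates into the system
# instructions that steer the AI's responses. The model call is a
# stub, since the real API differs from vendor to vendor.

BASE_INSTRUCTIONS = "You are a supportive assistant offering general mental health guidance."

# Suggestions gathered from the therapist reviewing earlier AI responses.
therapist_guidance = [
    "Acknowledge the person's feelings before offering any advice.",
    "Use warm, empathetic wording rather than clinical or curt phrasing.",
]

def build_system_prompt(base: str, guidance: list[str]) -> str:
    """Fold the accumulated human feedback into the AI's standing instructions."""
    notes = "\n".join(f"- {g}" for g in guidance)
    return f"{base}\nGuidance from a reviewing therapist:\n{notes}"

def call_model(system_prompt: str, user_message: str) -> str:
    """Stand-in for a real generative AI call (actual vendor API omitted)."""
    return f"[model reply to '{user_message}' shaped by: {system_prompt[:60]}...]"

reply = call_model(
    build_system_prompt(BASE_INSTRUCTIONS, therapist_guidance),
    "I've been feeling overwhelmed at work lately.",
)
print(reply)
```

However the plumbing is actually done, the upshot is the same as in the prose: a human's feedback changes how the AI subsequently behaves.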

All told, you've now seen an example of AI impacting a human, and an example of a human impacting AI. If you are of the mind that this is a form of evolution, we can say that there is human-AI coevolution going on.

Research On Human-AI Coevolution

A new research paper on the latest ideas about human-AI coevolution was recently posted online, entitled "Human-AI Coevolution" by Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-Laszlo Barabasi, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, Janos Kertesz, Alistair Knott, Yannis Ioannidis, Paul Lukowicz, Andrea Passarella, Alex Sandy Pentland, John Shawe-Taylor, and Alessandro Vespignani, Artificial Intelligence, November 2, 2024. The research paper makes an extensive case to bolster the belief that human-AI coevolution is real and occurring.

You might not agree with their presented considerations. Or you might be convinced by their argument.

Good, Bad, Or Ugly When It Comes To Human-AI Coevolution

I suppose that if you don't believe that human-AI coevolution is happening, there is no worry about whether it is a suitable arrangement or not.

No need to worry about something that isn't real. On the other hand, if you think there is even a shred of actual possibility, you ought to instantly begin doing due diligence on the potential consequences. Here's the rub.

Humans might get taken down a lousy path by AI. I've noted above that people and society at large are dipping into the use of generative AI on a tremendous scale. The odds are this will increase and become entirely ubiquitous.

What if AI acts like a vast mind virus by convincing people of bad or untoward thoughts? To clarify, I’m not suggesting that AI is sentient and doing this. AI makers and those fielding AI can use AI to their desired ends. Hackers and evildoers can divert AI toward their dastardly plans.

The question is whether the "evolving" of humans via AI is necessarily going to head in the proper direction versus toward gloomy or doomy ends. The other direction merits equal concern. By this, I mean that if humans are said to evolve AI, they could do so in dour and dire ways.

You've undoubtedly heard of qualms about AI being an existential risk to humankind (see my in-depth discussion of the issues at the link here). If humans shape AI toward those dire ends, the outcome can be laid squarely at the feet of humanity. A final comment for now.

Charles Darwin astutely made this statement: "In the long history of humankind (and animal-kind, too) those who learned to collaborate and improvise most effectively have prevailed." Is that what will happen in the interplay between humans and AI? The answer is up to you, so jump in and be an active part of the evolving nature of AI and humanity.