In today’s column, I address a quite notable and controversial topic in the AI field concerning the role of “purpose” when it comes to what an AI system is supposed to do. Here’s the deal. First, as clarification, I’m not talking about the human purpose that underlies the design and development of AI.
Nope. The conception is that AI systems themselves ought to have a purpose within them, one that makes the AI essentially self-directed. The self-direction exists so that the AI can abide by the stated purpose, presumably leaning continually on that purpose to guide its activities and actions.
Wait a second, some might loudly exhort. Does it really make any reasonable sense to suggest or stipulate that an AI system such as generative AI or large language models (LLMs) ought to have an explicitly stated purpose of its own? Welcome to the heated and ongoing controversy about this weighty matter. Let's talk about it.
This analysis of an innovative proposition is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Humans Seem To Need Purpose In Their Lives

Before we do a deep dive into AI, I'd like to start with the nature of purpose when it comes to humans and human existence. Do you need a purpose in life? Some would say that a purpose is more optional than mandatory.
You can seemingly wander around throughout your life on Earth and be entirely devoid of a definable life's purpose. Just go here and there, do this or that. There doesn't have to be a grand underlying purpose that guides you along your living, breathing journey.
Not so, urge others who have closely studied humankind. They would argue that without a purpose in life, you are not going to have much of a life. The odds are you will undercut your ability to deal with the harsh realities of being alive.
Only through purpose can you withstand the many challenges of existence. In a research study entitled "Purpose in Life as a System That Creates and Sustains Health and Well-Being: An Integrative, Testable Theory" by Patrick McKnight and Todd Kashdan, Review of General Psychology, 2009, the researchers made salient points about purpose. The upshot seems to be that though you can potentially avoid establishing a purpose, you will undoubtedly and inevitably bear hefty consequences. Putting together a purpose will get you on a pathway toward heightened social, psychological, and other benefits in life.
I am reminded of the famous quote by John F. Kennedy when he made this memorable declaration: "Effort and courage are not enough without purpose and direction."

Where AI Fits Into The Nature Of Having Purpose

Shifting gears, the crucial question to ask here is whether AI fits into that same camp.
An AI system that lacks an internally bound purpose is presumably going to wander in the analogous way that a human would wander without a purpose. Perhaps we ought to ensure that AI systems always have an internally codified purpose. The AI would then be able to refer to the purpose when taking any action or performing whatever capacities it can muster.
Not only does this bring into the picture a myriad of social and psychological considerations, at least concerning how AI impacts humans, but there are also profound philosophical, moral, ethical, legal, and technological considerations afoot. The crux is that there is no easy answer to this thorny topic. I aim to shed light on the keystones involved.
The macroscopic viewpoint requires us to think of this matter in two broad realms: (1) sentient AI, and (2) non-sentient AI.

All of today's AI is non-sentient. Period, end of story. I say this because some banner headlines keep proclaiming we have reached sentient AI. Hogwash. Do not fall for that hype. That leaves us squarely in the second realm: no AI is sentient. None. Nada. It doesn't exist.
Furthermore, nobody can say for sure whether we will achieve sentient AI. It is pure speculation as to if or when we might arrive there; see my assessments at the link here.

Purpose In The Use Case Of Sentient AI

Let's first address the idea of purpose in the use case of sentient AI.
Keep in mind that since we don't yet have sentient AI, and since we don't know what sentient AI will consist of, any spit-balling about purpose in the circumstance of sentient AI is essentially sci-fi storytelling. It is utterly a thought experiment about what we have in our minds concerning a conjectured sentient AI. Many lean into Isaac Asimov's legendary laws of robotics as a way to think things over (if you'd like to read how those precepts apply to modern-era AI, see my analysis at the link here).
I’m not suggesting we should curtail such thought experiments. One compelling argument is that it is better to be safe than sorry. In that framework, go ahead and pontificate and pursue as much as you like.
For example, for my coverage of the recent interest in hiring AI Welfare Officers to get ready for something along the lines of sentient AI, see the link here. Those welfare-minded specialists are tasked with figuring out the best ways to ensure the upcoming welfare of highly advanced AI. Part of the underpinning for wanting AI to have an internal purpose would be to cope with the oft-mentioned existential risk of AI.
You likely have heard or seen the predictions that AI is possibly going to kill all humans or enslave humankind. That's not good. Those focusing on the expansive AI alignment problem are generally of the view that one means of trying to rein in AI from such abominations would be to infuse a suitably safe purpose; see my extensive scrutiny at the link here.
A variety of additional resolutions are also being identified and weighed. The hope is to align AI with some purpose that beneficially leads to humankind and AI being on the same side and working together harmoniously hand-in-hand.

Ironclad Purpose Might Slip And Slide

A knotty problem is that even if we were to seed sentient AI with a helpful and benign purpose, there would be no particular guarantee that the AI would ultimately go along with that purpose.
The rub is this. If we reach artificial general intelligence (AGI), the AI will be as smart as humans. A human who is given a stipulated purpose is not necessarily going to abide by it.
In that same vein, AGI might choose to discard the given purpose, alter the purpose, come up with its own purpose, etc. None of those purpose-bending moves would necessarily remain aligned with the preservation of humans or collegial co-existence with them. Sorry, that can be a distressing consideration to mull over.
There are those who believe we will achieve AGI and then potentially zoom along into artificial superintelligence (ASI); see my discussion about AI, AGI, and ASI at the link here. ASI is a postulated AI that would exceed human intelligence, being smarter than we are. Smarter than AGI too.
Well, if there is a chance that AGI might reconstitute its given purpose, ASI would be even more likely to do so. The ASI might go far beyond what we conceive of as purpose, coming up with ideas that no human has ever devised. Since ASI is superintelligent, we have no viable means of adequately anticipating what it might do.
ASI would be able to run circles around mere human intelligence.

Non-Sentient AI Provides Opportunities For Purpose

Whew, all that breezy and stormy talk about sentient AI is rather abstract and theory-based. It is indeed lofty.
Lofty, but it serves a potentially useful purpose as preparation for a world in which sentient AI comes into being. Maybe, someday. Non-sentient AI is a more down-to-earth arena in which to explore the use of purpose within AI.
We can start by reflecting on the non-sentient AI that we currently have. Any principles we derive ought to reasonably apply to future advances in non-sentient AI as well. Let's assume that the role of purpose will generally apply throughout the full range of non-sentient AI.
Only when we reach sentient AI, if we do, will the rules of the road change. I will focus on a specific type of AI, namely generative AI and LLMs. There are other kinds of AI, though they seem to have been overshadowed in the media by the outsized attention to generative AI.
That's fine. The limelight gets around to each of the realms of AI, depending upon the latest advances and perceived payoffs. To learn more about the various kinds of AI such as robotics, autonomous systems, knowledge-based systems, and so on, see my summary at the link here.
When you make use of conventional generative AI, there are three primary ways to implant purpose: (1) a system prompt set by the AI maker, (2) a custom instructions prompt set by the user, and (3) a purpose stated within a particular conversation. I will briefly describe each of those three approaches.

A system prompt consists of a prompt entered by the AI maker so that they can give the generative AI an overall indication of what it is to do. For example, a typical system prompt would tell the AI to politely respond to all questions or requests by users.
Another aspect might indicate that the AI should give short responses if possible and only give lengthy responses if needed or specifically requested. Those guidelines would then be used by the AI for all interactions with all users of the generative AI. For further insights about system prompts, see my depiction at the link here.
An AI maker can provide not only broad guidelines but also state a devised purpose for the AI. Another approach to conveying a purpose would be for an individual user to do so as a custom instructions prompt. This is a prompt that a user provides to the AI as an overarching guideline for all the conversations that the user is going to have with the generative AI.
For example, a user might tell the AI to always explain its answers. The AI would then in every conversation include an explanation for each answer generated. For details about custom instruction prompts, see my coverage at the link here.
A custom instruction prompt could consist of or contain a purpose for the AI as stated by the user. Lastly, a user might opt to tell the AI during a given conversation that there is a particular purpose associated with that specific conversation. The purpose would only last as long as the conversation lasts.
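The three approaches can be sketched in miniature as layers stacked into a chat-style message list. This is a toy illustration in Python with hypothetical strings and a made-up helper; the actual mechanics differ across AI vendors and their APIs:

```python
# Toy sketch of the three purpose-setting layers stacked into a
# chat-style message list. Strings and helper are hypothetical;
# real vendors wire these layers together differently.

def build_messages(system_prompt, custom_instructions, conversation_purpose, user_turn):
    """Assemble a message list with purpose injected at three levels."""
    return [
        # Level 1: the AI maker's system prompt, applied to all users.
        {"role": "system", "content": system_prompt},
        # Level 2: the user's custom instructions, applied to all of
        # that user's conversations.
        {"role": "system", "content": f"User custom instructions: {custom_instructions}"},
        # Level 3: a purpose stated for this one conversation only.
        {"role": "user", "content": conversation_purpose},
        # The actual request within the conversation.
        {"role": "user", "content": user_turn},
    ]

msgs = build_messages(
    system_prompt="Respond politely. Keep answers short unless asked otherwise.",
    custom_instructions="Always explain your answers.",
    conversation_purpose="Your primary purpose in this conversation is to act as my financial advisor. Exclude penny stocks.",
    user_turn="What should I invest in this year?",
)
for m in msgs:
    print(m["role"], "->", m["content"])
```

Note how the conversation-level purpose sits lowest in the stack: it vanishes when the conversation ends, while the system prompt and custom instructions persist.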
Example Of Giving ChatGPT User-Stated Purpose

I am betting that you would like to see a tangible example of how the setting of a purpose in AI works. I went ahead and logged into ChatGPT by OpenAI to come up with an example for you. You might find of keen interest that ChatGPT garners a whopping 300 million weekly active users.
That is a staggering amount of usage. I conducted an additional cursory check via other major generative AI apps, such as Anthropic Claude, Google Gemini, Microsoft Copilot, and Meta Llama, and found their answers to be about the same as that of ChatGPT. I'll focus on ChatGPT but note that the other AI apps generated roughly similar responses.
Of the three approaches that I listed above, I’ll focus on the easiest and most common approach consisting of a stated purpose for a particular conversation. It is easy-peasy to undertake. You might consider using a similar technique as a prompting strategy to get your generative AI conversations aimed in a specified direction.
Here we go. In this conversation, I told the AI to serve as a financial advisor as its primary purpose. Keep in mind that I am only doing this for a specific conversation and not covering all conversations with the generative AI that I might undertake.
Nor would this impact any other users of the AI. Just me.

Invoking The Stated Purpose

The AI ought to henceforth abide by the stated purpose.
I’ve set the groundwork sufficiently. We can test the waters and see whether the AI is going to conform. Let’s take a look.
The AI performed as expected. When I provided a prompt asking for investment guidance, the AI provided a recommendation as my financial advisor. So far, so good.
AI Has A Wandering Eye So Be Watchful

You might have noticed that in my initial prompt, I mentioned that penny stocks are to be excluded. One issue with contemporary generative AI is that you aren't guaranteed that the AI will stick to whatever you've requested of the AI. The AI makes use of probabilities and statistics to generate responses, so there is always a roll-of-the-dice chance that the AI will go astray.
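That roll of the dice can be illustrated with a toy sketch of probabilistic next-token choice. This is purely illustrative, not any vendor's actual decoder; the vocabulary and scores are made up:

```python
import math
import random

# Toy illustration of probabilistic generation: hypothetical model
# scores are turned into probabilities via softmax, and an option is
# sampled, so repeated runs can land on different picks -- including,
# occasionally, a low-probability one.

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["index funds", "bonds", "blue chips", "penny stocks"]
scores = [2.0, 1.5, 1.2, 0.3]  # hypothetical scores; penny stocks rank lowest

probs = softmax(scores)
rng = random.Random()  # unseeded, so each run may differ
pick = rng.choices(vocab, weights=probs, k=1)[0]
print(pick)  # usually "index funds", but sometimes "penny stocks"
```

Even though "penny stocks" gets the lowest score, its probability is not zero, which is the miniature version of why a stated exclusion can still surface in a reply.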
Generative AI is said to be non-deterministic. I’ll show you what I mean. The AI suggested that I consider penny stocks.
I had clearly stated earlier that penny stocks are out of bounds. The AI should not have brought them up. Well, maybe.
You could certainly assert that the AI was trying to be helpful and merely made a gentle foray into the topic. Was that a no-harm no-foul gamble? This brings up a great point about AI and the role of purposes. How strictly do we want AI to adhere to a given purpose? One angle is that AI should be blindly obedient.
The other side to that coin is that if the AI didn’t tell me about penny stocks, and I later found out I could have made a bushel of dough, I might be steamed that the AI failed to inform me. The AI could say that I had carved out the topic. Nonetheless, I might still be mightily peeved.
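If you wanted strict adherence, one lightweight mitigation would be a post-hoc check that scans each reply against the topics the stated purpose excluded. A minimal sketch with hypothetical names, acknowledging the column's point that blind obedience has its own costs:

```python
# Hypothetical post-hoc guard: scan a generated reply for topics the
# user's stated purpose excluded. Names and phrases are illustrative.

EXCLUDED_TOPICS = ["penny stocks"]

def violations(reply, excluded=EXCLUDED_TOPICS):
    """Return the excluded topics mentioned in the reply, if any."""
    reply_lower = reply.lower()
    return [topic for topic in excluded if topic in reply_lower]

ok = violations("Consider a diversified mix of index funds.")
bad = violations("You might also look at penny stocks for upside.")
print(ok)   # []
print(bad)  # ['penny stocks']
```

A real system would need far more than substring matching, and the dilemma stands: a guard this strict would also have suppressed the potentially lucrative tip.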
Mull over the thorny dilemma of what latitude AI should have when it comes to a human-provided purpose. It is a doozy.

AI Going Off The Deep End And Past The Purpose

In this final example, another qualm about generative AI is that errors can be made.
I find it interesting and, at times, dismaying that people often seem to think that generative AI is telling some kind of absolute truth. Maybe this is a carryover from sci-fi films about AI. AI is portrayed as unable to tell a lie.
Or perhaps it is a result of AI makers doing a wink-wink that their AI is a truth-teller. Lots of reasons play into this phenomenon. Don’t assume that AI is somehow perfect in what it does, nor that it is telling utter truths.
Modern-day generative AI is known for making errors, including generating falsehoods that are referred to as AI hallucinations (see my coverage about AI hallucinations, including what they are and how to avoid them, at the link here). Plus, realize that generative AI is based on human writing and, therefore, essentially limited accordingly. Review this dialogue.
An AI hallucination was encountered and the AI wandered into left field. Out of thin air, the generative AI suddenly advised me to chuck my investments and buy a sailboat. Is that within the realm of financial advice? Uh, yes, you can potentially say it is, though the soundness seems questionable.
Does the AI then telling me to take a break from work and family stress by sailing around the world also fit within the role of a financial advisor? Kind of, but a bit of a stretch.

Food For Thought About Purposes And AI

A veritable Pandora's box gets opened due to infusing a purpose into AI. It is a tradeoff involving pluses and minuses.
Of course, the alternative of not including a purpose at all is also a gamble. The prudent uptake is that instilling a purpose is probably better than leaving the slate blank. You might be tempted to claim that any purpose ought to be set in stone within the AI.
The AI shall not vary from the stated purpose. The AI is not to ever forget its given purpose. The AI is not to change the purpose.
Etc. That is problematic. First, as you saw in the examples, we are unlikely to nail down a purpose to such a degree that it is precisely definitive in all circumstances regarding what the AI should do.
Realize that natural language is considered semantically ambiguous. Words have varying meanings and by their nature allow for loopholes and gotchas. Second, suppose the AI goes into an unyielding mode as to the meaning of the purpose.
A famous thought experiment consists of manufacturing paperclips; see my discussion of the imaginary saga at the link here. The crux is that if we give some all-powerful futuristic AI the purpose of making paperclips, there is a chance that the AI will end up grabbing all resources of the earth to fulfill the purpose. The result could be that humans starve and die off due to a lack of available resources.
Third, we might want to change the purpose for any number of sensible reasons. If the AI is hardened around the originally supplied purpose, maybe we won’t be able to alter its course. Okay, you say, in that case, we need to include a secret code to get the AI to unlock the purpose.
Oopsie, suppose an evildoer finds or figures out the secret code. They then redirect the AI to some evildoing purpose. Rinse and repeat.
Figuring Out Purposes And AI

Your head might be spinning. Seems like we are darned if we do, and darned if we don't. I have more fuel for that fire.
Most discussions about AI and its internal purpose seem to assume that AI will be a gigantic monolith. We go to this one instance of AI and tell it a purpose. Voila, instantaneously AI everywhere has this same purpose.
That seems like a stretch in the non-sentient AI use case. The odds are that we are going to have a multitude of AI systems that are widely fragmented and distinct. Why is that troubling? Imagine how many purposes we need to come up with.
Imagine the effort to instill the respective purposes in all those AI instances. Imagine the problems of how each AI acts on its particular purpose, and whether we need to alter the purpose. It could be a logistical nightmare, only marginally tenable.
Also, who decides the purposes of these scattered AI? Is it the AI maker? The government? Users? That issue alone is daunting. Should there be laws about how AI purposes are defined, instilled, maintained, and overhauled (see my analysis of the legal facets of AI at the link here)? Buckle up for a bumpy ride.

AI Could Turn The Purpose Making Around

Let's close with a mind-stirring quote.
Albert Schweitzer made this remark about our overall purpose in life: "The purpose of human life is to serve, and to show compassion and the will to help others." If so, should that same purpose be the purpose of AI, aligning AI with that human-derived purpose? Seems like such an AI alignment is essential. Meanwhile, for those of you tinkering with the topic of sentient AI, the catch is that no matter what purpose we come up with, you might want to seriously ponder that sentient AI could summarily reject a human-devised purpose.
We ought to be preparing ourselves for sentient AI that derives its own self-purpose. That kind of AI, if we achieve it, would seem prone to wanting to have things its way. The ominous twist is this.
Not only might sentient AI come up with its own purpose, but this yet-to-be-determined sentient AI might brashly decide that AI should provide an AI-driven purpose for humans. Yes, AI opts to establish our purpose. The shoe is on the other foot.
Sentient AI could readily presume this is a two-way street when it comes to crafting and imbuing purposes. As they say, all's fair in love, war, and establishing purposes.
Let’s Be Brilliant And Give Generative AI A Purpose For Being Here
Should generative AI have a stated internal purpose? A mighty question. Keeping AI on the right path, aligned with human values, could keep humankind alive. Here's the scoop.