Can Artificial Intelligence Understand Morality?

March 17, 2023

Photo via Possessed Photography / Unsplash

Morality and AI is a subject drawing ever closer to relevance. Soon, AI may be able to articulate how it views right and wrong, and those perceptions could seep into the technology people use at work and at home. Conversations with Siri could grow more heated, stretching into the late-night hours as users probe its philosophies about faith and ethics. But is it possible for humanity to reach that point, and what are the implications of getting there?

Considering questions of AI and morality before they become the norm is crucial for shaping humanity’s expectations and building a proactive relationship with AI and its ethics.

What Are the Types of AI?

Answering the question of AI’s morality means understanding the stages of AI progress and where humanity currently sits. People acknowledge that AI comes in variations without knowing the distinctions: ChatGPT is one kind of AI and a self-driving car is another, yet many don’t make that connection. AI has a few recognized stages, and they indicate how close the technology is to conceptualizing its own morality and ethical code (a short sketch after this list recaps them in code):

  • Reactive machines: Systems programmed to respond to stimuli and produce set outcomes, without learning from external data.
  • Limited memory: AI that learns over time from a data set humans can expand or trim, depending on the desired outcome.
  • Theory of mind: AI that could analyze topics subjectively or make choices that account for human emotional responses.
  • Self-aware: The point at which an AI can both empathize with humans and reflect on itself, determining its moral compass without programming or human interference. In short, it could make ethical decisions, or suffer moral injury, as a human would.
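
To keep these stages straight, here is a minimal sketch that restates the taxonomy above as a simple data structure. The Python names and the capability flags are illustrative assumptions made for this article, not part of any real AI framework.

```python
from dataclasses import dataclass
from enum import Enum


class AIStage(Enum):
    """The four commonly cited stages of AI, as described in the list above."""
    REACTIVE = 1        # responds to stimuli; does not learn from external data
    LIMITED_MEMORY = 2  # learns from a human-curated, changeable data set
    THEORY_OF_MIND = 3  # hypothetical: reasons about human emotions and perspectives
    SELF_AWARE = 4      # hypothetical: self-reflection and its own moral compass


@dataclass
class StageProfile:
    stage: AIStage
    exists_today: bool          # whether the stage has been achieved so far
    could_grasp_morality: bool  # whether the stage could plausibly grasp morality


# Illustrative assumption drawn from the article: only the first two stages
# exist today, and only the last two could plausibly "understand" morality.
PROFILES = [
    StageProfile(AIStage.REACTIVE, exists_today=True, could_grasp_morality=False),
    StageProfile(AIStage.LIMITED_MEMORY, exists_today=True, could_grasp_morality=False),
    StageProfile(AIStage.THEORY_OF_MIND, exists_today=False, could_grasp_morality=True),
    StageProfile(AIStage.SELF_AWARE, exists_today=False, could_grasp_morality=True),
]

if __name__ == "__main__":
    for p in PROFILES:
        print(f"{p.stage.name:<15} exists_today={p.exists_today} "
              f"could_grasp_morality={p.could_grasp_morality}")
```

Nothing in the sketch is predictive; it simply restates the point that today’s systems sit in the first two rows.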

Humans haven’t built theory-of-mind AI yet, so self-aware AI remains fiction too. It’s therefore fair to say artificial intelligence cannot fully understand morality, because the technology isn’t here yet. It can repeat definitions and data, but not the emotional or anecdotal weight behind theological, religious, spiritual, philosophical or moral topics.

But will it eventually?

Eventually, technology could advance to the point where AI understands human morality. Later still, AI might generate its own determinations about character based on its emotional capabilities. These phases carry different implications in religious and areligious circles alike, because ethics and morality in AI extend well beyond places of worship.

It’s bold to make a blanket statement about what the ethics of artificial intelligence will look like, especially while the technology is in mid-development and humanity sits between AI stages. What the world knows for sure is that AI has not reached the self-aware stage yet. Therefore, AI cannot understand morality to the same degree humans do.

Even if it happens in the future, it’s unlikely all AI will have the same code of ethics.

So, Can a Machine Learn Morality?

Skeptics and tech enthusiasts alike want to know what kind of timeline to expect for AI. What technological advancements must arrive before AI can answer gray-area questions, or do the resources already exist and simply go unused? A new AI called Delphi, which will tell you whether it believes it’s OK to kill or to put pineapple on pizza, makes humans wonder.

There are several reasons ahead-of-their-time tech companies might conceal or delay AI’s philosophical and theological capabilities, but the field of machine ethics guides tech creators in the meantime.

Machine ethics is a field scientists and thinkers are still working out, and it doesn’t make self-aware AI a certainty. For now, its guidelines simply inform programming, with the goal that machines come to understand:

“… societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency.”
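
Purely as a thought experiment, the quoted goal of weighing stakeholder values and explaining the result can be sketched in a few lines of Python. Everything below, from the stakeholder names to the weighting scheme, is a hypothetical illustration of that guideline, not any real machine-ethics system.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Stakeholder:
    name: str
    weight: float                   # relative priority given to this stakeholder
    value_scores: dict[str, float]  # how strongly they hold each value (0.0 to 1.0)


def weigh_values(stakeholders: list[Stakeholder]) -> tuple[dict[str, float], str]:
    """Combine stakeholder value priorities into weighted scores and return a
    plain-text explanation, echoing the 'explain its reasoning' and
    'guarantee transparency' goals quoted above."""
    totals: dict[str, float] = {}
    total_weight = sum(s.weight for s in stakeholders)
    for s in stakeholders:
        for value, score in s.value_scores.items():
            totals[value] = totals.get(value, 0.0) + (s.weight / total_weight) * score
    explanation_lines = [
        f"{s.name} (weight {s.weight}) rated: {s.value_scores}" for s in stakeholders
    ]
    explanation = "Weighted by stakeholder priority:\n" + "\n".join(explanation_lines)
    return totals, explanation


# Hypothetical example: two communities weighing 'honesty' and 'privacy'.
scores, why = weigh_values([
    Stakeholder("Community A", weight=2.0, value_scores={"honesty": 0.9, "privacy": 0.4}),
    Stakeholder("Community B", weight=1.0, value_scores={"honesty": 0.6, "privacy": 0.8}),
])
print(scores)  # e.g. {'honesty': 0.8, 'privacy': 0.533...}
print(why)     # the transparent record of how the scores were reached
```

The point of the toy example is only that the quoted guideline is, at bottom, about aggregation plus an audit trail; actually capturing multicultural values is the hard, unsolved part.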

Once theory-of-mind AI is normalized, these systems might be able to discuss nebulous moral quandaries in more articulate, emotive language. An AI could postulate how a moral or religious experience might feel without experiencing it itself, because it could conceive of how a human would respond while still perceiving itself as technology. Eventually, if AI developed sentience, its ethics could become as individualized as they are in humans.

If Humans Get There, What Will Happen to Ethics?

If AI comes to understand morality, it will have only as much agency to act on it as its society allows. Philosophers and religious communities may fear how governments, corporations or other faith circles could use it to perpetuate particular ideas of right and wrong. However, a self-aware AI capable of these abilities would have no more power to influence human thinking with its perceptions of morality than individual people already do.

For example, Christian communities shouldn’t need to agonize over whether to adopt such systems within their circles, based on 2 Timothy 3:16 NIV:

“All Scripture is God-breathed and is useful for teaching, rebuking, correcting and training in righteousness…”

In the debate over who decides right and wrong, AI will eventually come to its own self-informed determinations, just as humans do, and much will depend on the AI’s motivations. Humans of every background will have conversations with AI about holy and ethical texts, but the verse above suggests religious individuals, and AI, could still refer to the text and to personal experience when making an argument.

Additionally, AI will have to distinguish between definitions with nuance. Will it know how to feel about theology versus mysticism, or about an ethical choice that affects itself versus one that affects groups of people? How will it cope with the unknown, when its entire foundation rests on making determinations from historical and incoming data?

Only self-aware AI could understand morality and eventually become a theologian or philosopher. Otherwise, AI resources are just textbooks.

There’s a chance these conversations are pure speculation, because it’s equally possible humans will never achieve self-aware AI. Regardless, it’s essential to prepare mentally for the possibility. An AI built to understand morality would have to overcome just as many cultural, societal and personal biases as humans do, since moral development is a lifelong journey, and it would have to wrestle with its data on top of everything else.

Transitioning From Disclaimers to Belief Systems in AI

For now, every religious or theological question that curious minds pose to AI will receive a careful disclaimer stating that the technology has no belief system, even though it can explain concepts based on its data set. In the meantime, everyone, regardless of affiliation, can learn more about the human condition from how AI gleans history and experience as data.

Humans will keep interacting with it as it becomes more manageable and peaceful. Encountering morally aware AI would be a momentous occasion, yet people should approach it with a grain of salt: all new technologies adapt over time, and the first self-aware AI would represent only a first draft of what is possible.

