Series: The Big Questions
I’ve worked in three fields relevant to AI, and I use AI daily. Google Search is a type of AI. I also use ChatGPT minimally – I don’t trust it yet. I do use it to create images such as the one below. I use AI in Microsoft Word to find my errors, and I employ WordTune to refine some sentences, because my brain tends to deliver brain dumps without transition sentences. I’ve improved over the years – I no longer need transition sentences between every thought. My writing is even critiqued by Patheos using artificial intelligence.
My wife claims she can see the wheels turning in my head. I suspect I’m not AI. But she laughs when she sees me turn one way then another while I try to decide which approach to take when doing simple things like making coffee. Or when I take a minute to recognize what she has asked and then think about it.
My experience with AI
In the field of AI
I worked for a company that developed AI to enhance Google search and to help other companies use AI in their product workflows. I was the “explainer” who understood the product at a management level and could develop literature for product users and marketing. I really liked the “more like this” feature they created for search results and wish Google would use it.
Semiotics and semantics for communications
The second field is working with semiotics and semantics for communications. I did original research on visual semiotics, showing how visual symbols can be used for communication in media like books and video. I use semantics to help me understand research and communicate with fewer errors. I also write books in several fields to help educate people – most are free. If I write to entertain people, I charge for those. But I would pale in comparison to the software engineers who work with this at Google.
Spiritual field, and religion
The third is the spiritual field, and religion, which is not part of scientific investigation. Science works with the physical universe. It has no lens or capacity for examining the spiritual universe. Nor does the spiritual universe cross paths in any primary way with the physical. At the most the spiritual world might view aspects of the physical world as beneficial, not beneficial, or harmful, and warn against misapplying them – an overseer role.
The spiritual world, as I keep saying, includes things that can’t be examined by science because they aren’t physical properties – they are more abstract but apply directly to living life. They are distinctly human endeavors, shaped by both human and spiritual influences. Spiritual influences are abstract and include but aren’t limited to:
- Love: Eros – romantic, passionate love (of the body); Philia – affectionate, friendly love; Storge – unconditional, familial love; Agape – selfless, universal love; Ludus – playful, flirtatious love; Pragma – committed, long-lasting love; Philautia – self-love
- Justice – both social and criminal
- Life: meaning, purpose, pursuits, joy, accomplishment, psychology, sociology, life coaching
- Truth – solid evidence for the things of life, leading to better and more reliable paths rather than misleading quests and illusions
- Beauty and ethos and aesthetics
- Creativity and new ideas in various fields – abstract thought
In the paradigm of who we are, which includes religion, education, culture, family, neighborhood, profession, politics, experiences and attitudes, and many other things, it’s the spirit of these things that gives us meaning and purpose. Some influence more than others. We are made in God’s image.
One thing important to realize in its application to AI is that the spiritual is about things that AI can neither fully understand nor investigate.
AI is about knowledge
AI can pull together information that people have produced and put on the Internet, or can be used internally in companies with their information. It mostly works well with structured information (categorized and labeled) but has become very proficient at working with unstructured information.
Information is qualitatively different from wisdom. Wisdom is the result of experience and competent judgment: the sound application of experience, knowledge, and good judgment to an action or decision.
You can put experience into a table so that we know at what point a 2” x 4” board bends too far, breaks, or moves sideways. This is critical for architects and carpenters. So wisdom can be accumulated and investigated. For example, statements from experience working with people can be researched and used to inform decisions, but not to make them.
Wisdom is produced by people. People are on the cutting edge of everything. People create wisdom – AI not so much. And people question what is known so that fields of endeavor are continuously growing in knowledge and wisdom. So far all AI can do is collect public information. It doesn’t create original information.
As a founding financial supporter of TED Talks, I recognize these speakers as being thought leaders and on the cutting edge of research into areas not addressed by the hard sciences.
Knowledge collection is useful, not interference
One example of the usefulness of knowledge is in the field of cancer imaging. Enough imaging has been done (years ago), and interpreted by doctors, that it provides immense knowledge to inform AI, so that AI can examine X-rays and spot cancer better than an individual human can. But when you pair AI with a doctor’s evaluation of the X-ray, you get even better results, with far fewer misses. AI doesn’t replace the doctor, it augments the doctor. Augmentation is a very significant aspect of AI.
Another example is writing computer code. For around 20 years computers have been able to write computer code, and some systems have even created new code that is faster and more efficient. But for most code, a person is required to evaluate it for errors and things that can go wrong. And the field of computer coding continuously advances because people create better ways of doing things and keep up with advancing technology. Each year I look at coding for something I’m doing, and I can’t keep up with the changes.
I’ve always held that knowledge is something that is handed to us by schools or independent research and should be free. The real value is being able to apply that knowledge in a wise manner. I think MIT and Harvard agree with that and they and other major universities put excellent courses on EdX that are free or very affordable. Similarly very useful courses are offered on Udemy and other platforms. I use all of them.
What AI does is collect knowledge, and ChatGPT and similar AI applications can collect a lot of information and present it in a very useful and digestible form. Like anything, it can be misused. But the information is not much different from looking things up in an encyclopedia and seeing a condensed version. It carries a margin of wisdom from the experience of its writers, but for the reader who has not gained that experience, it is mostly knowledge. It’s very useful.
The semantic difficulties for AI
Semantics is about meaning in language or logic. That’s kind of a broad term. On a daily basis I encounter challenges with concepts and contexts. I make some mistakes. AI makes more. Both concepts and contexts are outside the easy realm of AI – you have to be explicit in what you want. Concepts and contexts can be more subjective or abstract. (Abstract: existing in thought or as an idea but not having a physical or concrete existence.) Abstract is close to the spiritual realm.
In dealing with context, I understand that a word has many different meanings, and those meanings can change over time and in different locales. So I have to grapple with that when I read a word that came from 3500 years ago, and comprehend what it meant as it traveled through time. To be precise, I need to know whether the translators understood the true meaning of the word and translated it correctly. And many distinct words are translated into another language as a single word. That makes interpretation very difficult, especially because that one word in the other language also evolves to mean different things.
How does AI work with this? Context. Look at words in the dictionary and you see many different meanings for the same word. But AI is not as adept at determining context as people. It lacks cues or experience unless you are very specific. For example, if I say I’m blue today, AI can guess that I am probably describing my mood. However, it doesn’t know that I may have spilled a can of blue paint on myself. I have stepped into a bucket of paint before, but AI doesn’t know that either.
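The “I’m blue today” problem above can be sketched in a few lines of code. This is a toy, hypothetical word-sense disambiguator: the two senses of “blue” and their signature words are invented for illustration, and real systems use far richer signals, but the principle is the same – pick the sense whose typical context overlaps most with the words around it.

```python
# Toy word-sense disambiguation by context overlap.
# The senses and their signature words below are made up for illustration.

SENSES = {
    "blue (mood)":  {"sad", "down", "feel", "today", "mood"},
    "blue (color)": {"paint", "can", "spilled", "wall", "bucket"},
}

def disambiguate(word_senses, context):
    """Pick the sense whose signature words overlap the context the most."""
    context_words = set(context.lower().split())
    return max(word_senses, key=lambda s: len(word_senses[s] & context_words))

print(disambiguate(SENSES, "I feel blue today"))              # blue (mood)
print(disambiguate(SENSES, "I spilled a can of blue paint"))  # blue (color)
```

Notice that without context words like “spilled” or “paint,” the sketch (like AI) can only guess the most common sense – it has no way to know about my history with paint buckets.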
One of the very significant innovations in the field of semantics is the free resource WordNet, created by Princeton University. It works with concepts, and it may help AI work better with them. “WordNet® is a large lexical database of English. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations.”
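The synset structure described in that quote can be illustrated with a hand-built miniature. The real WordNet database is vastly larger and is usually accessed through libraries such as NLTK; the four synsets below are an invented sample showing how concepts link upward through hypernym (“is a kind of”) relations.

```python
# A hand-built miniature of WordNet's structure: synsets (concept groups)
# linked by hypernym ("is a kind of") relations. Invented sample data.

SYNSETS = {
    "rocker.n.01":    {"lemmas": ["rocking_chair", "rocker"], "hypernym": "chair.n.01"},
    "chair.n.01":     {"lemmas": ["chair"], "hypernym": "seat.n.01"},
    "seat.n.01":      {"lemmas": ["seat"], "hypernym": "furniture.n.01"},
    "furniture.n.01": {"lemmas": ["furniture"], "hypernym": None},
}

def hypernym_chain(synset):
    """Walk the 'is a kind of' links up to the most general concept."""
    chain = []
    while synset is not None:
        chain.append(synset)
        synset = SYNSETS[synset]["hypernym"]
    return chain

print(hypernym_chain("rocker.n.01"))
# ['rocker.n.01', 'chair.n.01', 'seat.n.01', 'furniture.n.01']
```

Chains like this are what let software reason that a rocking chair is a kind of chair, and a chair a kind of furniture, even though no single dictionary entry says so.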
In some ways WordNet mimics the brain’s method of categorizing using eigenvalues. The brain forms an eigenvalue for a chair, then adds different types of chairs to that category until it finds something radically different, such as a rocking chair, and then creates a new category.
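That categorization process – add to the nearest category, or start a new one when something is radically different – can be sketched as a toy prototype-based clusterer. The feature vectors and distance threshold here are arbitrary illustrations, not a model of actual neural processing.

```python
# Toy prototype-based categorization: items join the nearest existing
# category, or start a new one when they are too different.
# Items are simple feature vectors; the threshold is arbitrary.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def categorize(items, threshold=2.0):
    categories = []  # each category is a list of member vectors
    for item in items:
        # the first member of each category acts as its prototype
        nearest = min(categories, key=lambda c: distance(c[0], item), default=None)
        if nearest is not None and distance(nearest[0], item) <= threshold:
            nearest.append(item)           # similar enough: join the category
        else:
            categories.append([item])      # radically different: new category
    return categories

# two ordinary chairs cluster together; the rocking chair starts a new category
cats = categorize([(1, 1), (1.5, 1.2), (5, 5)])
print(len(cats))  # 2
```

The interesting design choice is the threshold: set it too high and everything collapses into one category; too low and every item stands alone – roughly the judgment call our brains seem to make automatically.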
When I look at context related to Bible passages I have many types of context I consider:
- How does this fit with the idea of love as the lens through which we view all?
- Did this word change meaning through time?
- What other influences were going on in the region?
- Which of the several groups active in that area does this view represent?
- Do the English words truly represent the idea of the ancient Greek, Aramaic, or Hebrew word?
- And a variety of other things.
I asked ChatGPT how eigenvalues in the brain work. I got: “No results.”
You can think of eigenvalues simply as a pin on which information is collected. It’s a neural network of nodes and connections. Or if you really want to dig in see The Maximum Eigenvalue of the Brain Functional Network Adjacency Matrix: Meaning and Application in Mental Fatigue Evaluation.
I know something ChatGPT doesn’t know – ha, ha, ha!
I didn’t use ChatGPT to write this
Why? I make mental associations across many fields. A simple AI search might find specific information on individual fields, but it would fail to find the differences between them.
An example I often use is radio. I’ve enjoyed or worked with many aspects of radio. I love listening to music, was a radio announcer, and was educated and trained in electronics for designing and working on transmitters and receivers. So when someone says the word radio to me, I have to listen more closely for context.
Google search works similarly. Just the word radio will find the word in many different contexts, but if you use more words it gains an understanding of the context you want to use.
The challenge of new frontiers – AI
Every new technological development, even though incredibly beneficial, presents new challenges for us. When the Internet became widely available, some got lost in an endless search for knowledge. We’re still learning how to cope with preoccupation or addiction to video games, porn, social media, disinformation and conspiracy theories, hate speech, etc. We each have to struggle with how each of these fit into our lives in a beneficial way, or don’t fit.
Politicians have to struggle with whether or not to make regulations that prevent misuse and other harmful effects. IEEE published an article: U.S. and EU Announce Plans to Develop AI Standards. But with European and US views perhaps being widely different, will we be able to achieve unified regulation? So far the Internet itself and social media remain a wild west show, with insufficient regulation of their harmful effects.
There are many things in life that can be beneficial, or can be misinterpreted or misused: money, physical attraction, knowledge, medicine, the Internet, social media, sex, etc. The value of artificial intelligence is in presenting us with knowledge, perhaps more widely than an encyclopedia could, to bring good things into our lives.
The value of the spiritual is in assessing the implications of new knowledge and either encouraging limitations on its use or encouraging beneficial use.
AI is not a terrible thing. It’s a tool that can be used beneficially or for ill.
The standard of belief and conduct for Christianity is love. God is love. We’re asked to be like God, or as Jesus has shown us.
If you find these articles intriguing, please consider joining the mailing list.
If I’ve challenged your thinking, I’ve done my job.
Our answer is God. God’s answer is us. Together we make the world better.