I attended a lunchtime talk here at Georgia College by Dr. Cynthia Alby, who has published quite a bit about ChatGPT recently. It was so helpful as I think about what I do and don’t want to do with it, and what I want students to do with it. One key takeaway is that you can plug something you wrote into ChatGPT and ask it to polish it, and you’ll get it back with grammatical improvements, much as you would from Grammarly, with fewer mistakes but also no opportunity to decide for yourself about each change. The result would still be the human author’s work, yet it would be flagged as likely produced by a chatbot if you ran it through detection software. Conversely, all it takes to convince a detector that something made by AI was authored by a human is to add a few grammatical errors or a word that doesn’t exist. If you’ve been thinking that you’ll just run things through a detector, problem solved, that’s definitely not the case. Indeed, if you’ve been doing that with TurnItIn or anything similar, you shouldn’t be. Just like the skills we need to teach students, answering questions about authenticity requires human wisdom and cannot be fully automated, although software can provide us with important data and help us in our decision making. It turns out that what students need to learn is precisely what educators either have learned or need to learn.
I decided to tell my class this semester that I’m happy for them to do anything at all with ChatGPT, as long as they tell me what they did. I added that if they ask the software to write an essay and submit that, it will earn them very few points, whereas if they ask it to write an essay and then fact-check and evaluate what ChatGPT produced, that can earn points comparable to writing an essay themselves. ChatGPT is a notorious BS-er, and evaluating what it produces involves the same research skills as writing an essay oneself. If they use it, pretend they did not, and I catch them, then that will be considered cheating and dealt with accordingly. I’m looking forward to discovering more about what this technology can and cannot do, and what it does well and what it does poorly. I’m excited at the prospect of students learning simultaneously about ancient Christian women and the latest technology.
Although not tied directly to my class this semester, on a whim I decided to ask ChatGPT about the existence of God. This was the result (my questions are in bold):
Does God exist?
The existence of God is a matter of personal belief and faith. There is no scientific evidence that proves or disproves the existence of God, and different people have different beliefs about whether or not God exists. Some people believe in the existence of a higher power, while others do not.
Have the philosophical arguments for the existence of God persuaded you to believe God exists?

As a machine learning model, I do not have personal beliefs or the ability to hold faith. I can provide information on the various philosophical arguments for the existence of God, but I cannot say whether or not I believe in the existence of a higher power.
I can provide information on the various philosophical arguments such as Cosmological, Ontological, Teleological, Moral argument etc. But you should check with different philosophers, scholars, theologians, scientists etc. for a more detailed and nuanced understanding.
I will be giving a talk about AI, religion, misinformation, search, and ethics here at Georgia College. More on that when the precise details are pinned down. In the meantime here are some articles and other resources on this subject:
ChatGPT and Writing Instruction (the headline is “Worried about ChatGPT? Don’t Be.”)
Eddie Arthur messed around with ChatGPT.
My son Alex has been using an AI art generator to create images to go with his music in cases where he cannot share the original footage on YouTube due to copyright. Visit his YouTube channel to see some examples, and see his Vimeo channel for the music with the originally intended footage, since Vimeo is much more lax about such things.