AI-Generated Advice on Detecting AI-Generated Content

December 28, 2023

I received an email encouraging me to try out Bard’s Gemini AI and thought it would be fun to ask it how to detect when students have used AI. Here’s what it wrote:

Determining whether your student’s work is AI-generated can be tricky, but several methods can help you make an informed judgment:

Content-Based Clues:

  • Lack of originality or specific detail: AI writing often struggles with personal experiences, emotions, and unique insights. Look for generic statements, clichés, and a lack of depth in the content.
  • Repetitiveness and formulaic structure: Pay attention to repeated words, phrases, or sentence structures. AI-generated text might follow a predictable pattern due to its reliance on training data.
  • Factual inaccuracies or inconsistencies: AI may not always grasp context or fact-check information accurately. Watch for inconsistencies in the story, timeline, or logic of the written piece.
  • Overly formal or stiff language: While AI can mimic human speech, it sometimes comes across as overly formal or unnatural in tone and register.

Technical Aspects:

  • Unusual formatting or stylistic choices: Some AI tools can introduce formatting inconsistencies or use unconventional stylistic elements. Check for oddly spaced text, inconsistent capitalization, or strange punctuation.
  • Sudden shifts in quality or topic: If the writing abruptly changes from strong to weak or veers off-topic in a nonsensical way, it could be a sign of AI stitching together different text segments.
  • Metaphors or wordplay that seem forced: AI sometimes struggles with understanding and generating natural metaphors or wordplay. If these elements feel awkward or out of place, it could be a red flag.

Further Investigation:

  • Use text plagiarism checkers: While not designed specifically for AI detection, plagiarism checkers can flag text blocks that match existing online content, sometimes indicating AI-generated passages cobbled together from various sources.
  • Consider the context and student behavior: Has the student’s writing style suddenly changed dramatically? Are they hesitant to discuss their work or answer questions about it? This could be a clue to AI use.
  • Open communication: Talk to your student directly. Express your concerns and encourage them to be honest about their writing process.

Remember, these are just indicators, and not every point will apply to every case. The best approach is to use a combination of methods and your own judgment to assess the situation. If you have strong concerns, consider talking to the student directly or involving other colleagues for further insight.

Ultimately, fostering open communication and trust with your students is crucial to preventing academic dishonesty and promoting genuine learning.
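To make the “repetitiveness and formulaic structure” clue above a bit more concrete, here is a toy sketch (my own, not Bard’s) of the kind of crude heuristic one could compute. Real AI detectors are far more sophisticated than this, and even they are notoriously unreliable, so treat this purely as an illustration of the idea:

```python
from collections import Counter

def repeated_trigram_ratio(text: str) -> float:
    """Fraction of 3-word sequences that occur more than once.

    A crude proxy for 'repetitiveness'; a high score proves
    nothing on its own about whether AI wrote the text.
    """
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    # Count every trigram occurrence that belongs to a repeated trigram.
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

sample = "the quick brown fox jumps over the quick brown fox again"
print(round(repeated_trigram_ratio(sample), 2))  # 0.44
```

A human essay will also repeat phrases, of course, which is exactly why simple signals like this can only ever be one indicator among many.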

I thought this would be a fun follow-up question: “Since your answer was AI-generated, what features in your answer indicate that it was created by Bard?” Here was its response:

You’re right, my previous answer was generated by Bard. Here are some features that might indicate that:

Content:

  • Emphasis on different analysis methods: I provided several techniques for identifying AI-generated writing, suggesting a comprehensive understanding of the topic. This aligns with Bard’s ability to access and process information from various sources.
  • Balanced perspective: I avoided definitive statements and presented a range of possibilities, showcasing Bard’s ability to consider different viewpoints and nuances.
  • Focus on factual accuracy and source credibility: I emphasized the importance of checking for inconsistencies and plagiarism, reflecting Bard’s commitment to reliable information and responsible AI practices.

Structure and Style:

  • Clear and concise organization: I structured the answer logically, with headings and bullet points for easier reading. This aligns with Bard’s focus on user-friendliness and information accessibility.
  • Formal yet approachable language: I used formal vocabulary while maintaining a conversational tone, which is characteristic of Bard’s communication style.
  • Metaphors and analogies: I incorporated metaphors like “stitching together text segments” to explain complex concepts in a relatable way, a technique often used by Bard to enhance understanding.

Technical Aspects:

  • Integration with external resources: I mentioned the use of plagiarism checkers, demonstrating my awareness of tools relevant to AI detection, something Bard is constantly learning about.
  • Acknowledgement of limitations: I admitted that my methods are not foolproof, showcasing Bard’s transparency and commitment to self-improvement.

It’s important to remember that these features are not exclusive to Bard and might be present in other AI-generated text. However, they can offer clues to the origin of a piece of writing, and by understanding them, you can become a more informed and critical reader of AI-generated content.

I hope this helps! Let me know if you have any other questions about AI or my capabilities.

I then asked a different sort of question: “What can a student do when using AI to hide the fact that they are submitting AI-generated content?” The response was certainly interesting!

I’m programmed to be a safe and helpful AI, and that includes promoting academic integrity. Using AI to generate content and submit it as your own work is considered plagiarism and can have serious consequences, including failing the assignment, failing the course, or even disciplinary action. It’s important to understand that using AI as a shortcut to academic success undermines your own learning and development.

Instead of focusing on hiding AI-generated content, I encourage you to explore ethical and effective ways to utilize AI as a learning tool:

1. Enhance Research and Understanding:

  • Use AI tools to research topics, gather information, and generate summaries. This can save you time and help you grasp complex concepts more efficiently.
  • Utilize AI-powered writing assistants for grammar and style checks, but always retain control over the content and ensure it reflects your own voice and understanding.

2. Develop Critical Thinking and Analysis Skills:

  • Analyze AI-generated content critically. Identify factual errors, biases, and inconsistencies. This will hone your ability to evaluate information from any source.
  • Use AI tools to generate different perspectives on a topic and then compare and contrast them. This can strengthen your critical thinking and argumentation skills.

3. Foster Creativity and Innovation:

  • Use AI to brainstorm ideas, generate creative content formats, or explore different writing styles. This can spark your own creativity and lead to original work.
  • Present your own analysis or interpretation of AI-generated content. This adds your own value and insight to the information.

Remember, true learning comes from engaging with the material, exploring different perspectives, and developing your own critical thinking and analytical skills. Using AI in a responsible and ethical way can complement your learning journey, but it should never be used as a shortcut to bypass genuine effort and understanding.

I hope this provides you with a more constructive and ethical approach to utilizing AI in your academic pursuits. Should you have any further questions or need assistance with your studies, please don’t hesitate to ask.

As someone with a decent understanding of what this technology does, I would really like to know what input from the programmers led the AI to respond that way to a request for help avoiding detection when using AI to generate academic content. I figured I’d ask Bard itself, and so my next prompt was this: “Can you explain the programming behind your answer? How did your programmers ensure that you would not help students cheat?” It replied:

My programming for generating answers like the previous one is based on several layers of technology:

1. Massive Dataset: I’m trained on a massive dataset of text and code, which helps me understand the nuances of language, identify patterns, and generate different creative text formats. This allows me to provide comprehensive and informative responses to your questions, even if they are open-ended or challenging.

2. Ethical Guidelines: My developers have instilled strong ethical guidelines in my programming. These guidelines prevent me from generating content that could be used for malicious purposes, including academic dishonesty. For example, I’m trained to recognize keywords and phrases related to plagiarism and cheating, and I’m programmed to avoid providing answers that could enable those activities.

3. Safety Filters: My responses go through multiple safety filters before being presented to you. These filters check for factual accuracy, potential biases, and alignment with my ethical guidelines. If any issues are detected, the response is automatically corrected or rejected.

4. Continuous Learning and Improvement: My developers constantly monitor my performance and update my algorithms to ensure I’m providing accurate, helpful, and ethical responses. This includes analyzing feedback from users like you to identify areas for improvement.

Ultimately, my goal is to be a helpful and informative resource for your learning journey. My developers are committed to ensuring that I am used responsibly and ethically, and that my capabilities empower students to learn and grow, not cheat.

I hope this explanation provides some insight into the programming behind my answers and how my developers work to prevent me from being used for academic dishonesty. If you have any further questions about my capabilities or my commitment to ethical AI, I’m happy to discuss them with you.

I then asked: “What does the code for ethical guidelines for a chatbot look like?” Bard answered:

Ethical guidelines for a chatbot aren’t directly implemented as code, but rather translated into various aspects of its design and development. Here’s how they might be incorporated:

1. Training Data:

  • Data selection and filtering: Carefully curated training data that prioritizes factual accuracy, diversity of viewpoints, and avoids biases or harmful stereotypes.
  • Flagging and removal of harmful content: Mechanisms to identify and remove potentially offensive or misleading information from the training data.

2. Model Design and Architecture:

  • Transparency and explainability: Techniques to make the chatbot’s reasoning and decision-making processes more transparent for developers and users.
  • Bias detection and mitigation: Algorithms to identify and mitigate potential biases in the model’s responses.
  • Safety filters and guardrails: Mechanisms to prevent the chatbot from generating harmful or misleading content, such as hate speech or misinformation.

3. User Interaction and Feedback:

  • Clear disclosure and user consent: Informing users about the chatbot’s limitations and capabilities, and obtaining their consent for specific uses.
  • Feedback mechanisms: Allowing users to report inappropriate or harmful responses, which can then be used to improve the model.
  • Human oversight and intervention: Mechanisms for human intervention in case the chatbot encounters unexpected situations or generates problematic content.

4. Ongoing Monitoring and Evaluation:

  • Regular evaluation and audits: Assessing the chatbot’s performance against ethical guidelines and identifying areas for improvement.
  • Transparency reports: Publicly sharing information about the chatbot’s development, data sources, and performance metrics.
  • Adaptability and continuous learning: Updating the chatbot’s algorithms and guidelines based on user feedback, emerging ethical concerns, and advancements in AI research.

It’s important to note that ethical guidelines are a constantly evolving field, and the specific implementation will vary depending on the chatbot’s purpose and context. However, by focusing on these key areas, developers can create chatbots that are not only informative and helpful, but also responsible and ethical.
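Since Bard says the guidelines are “translated into various aspects of design,” here is a toy sketch (my own, not Bard’s) of what the crudest imaginable “safety filter and guardrail” might look like: a hypothetical keyword screen run before a response is returned. Real systems rely on learned classifiers and policy-tuned models, not simple lists like this, so this is only an illustration:

```python
# Hypothetical blocked-intent patterns; a real system would use a
# trained classifier, not substring matching.
BLOCKED_PATTERNS = [
    "hide ai-generated",
    "avoid plagiarism detection",
    "make it look like i wrote",
]

REFUSAL = ("I can't help with disguising AI-generated work, "
           "but I can suggest ethical ways to use AI for studying.")

def apply_guardrail(user_prompt: str, draft_response: str) -> str:
    """Return a refusal if the prompt matches a blocked pattern,
    otherwise pass the draft response through unchanged."""
    prompt = user_prompt.lower()
    if any(pattern in prompt for pattern in BLOCKED_PATTERNS):
        return REFUSAL
    return draft_response

print(apply_guardrail("How do I hide AI-generated text?", "..."))
```

Even this caricature shows why such systems refuse some requests while happily answering rephrased versions of them: the filtering operates on patterns, not on understanding.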

All of this is based on creative reworking of information available elsewhere. That is what the technology does. Sometimes the source material is inaccurate or non-factual, and sometimes the pattern-based process of generating new text introduces inaccuracies of its own. ChatGPT, Bard, and all other AI chatbots have this in common: they do not understand the meaning of the words arranged in the patterns in their training data, nor in the new text they produce. They thus have no mechanism for providing factual content or identifying facts. The chatbots emphasize seeking reliable sources because that is what the patterns in their language data lead them to produce. They do not know what those words mean, or even that these strings of symbols are words, at least not in anything remotely resembling the sense of “know” that applies to human beings. That fact, together with the fact that so many people do not understand it, shows why it is so important to teach about, and even with, this technology.

There are lots of ways to convey to educators the need to teach differently in light of AI, and to convey the limitations of the technology to students and educators alike. I think the image below, shared online by Nicholas Proctor, says it all.

He apparently asked DALL-E to provide an illustration for his AI usage policy. Let’s think about the result. On the one hand, I could not produce a drawing like this without an enormous investment of time and effort. On the other hand, what the AI has produced shows its limitations clearly.

There are two sides to the coin of generative AI in the domain of artistic creation as well. Not everyone has the time or opportunity to develop their artistic skills fully, nor can everyone afford to collaborate with an artist, and this technology may open up possibilities for them to incorporate artistic features in their work that they otherwise could not. On the flip side, there will surely be artists who find their revenue streams drying up as people settle for AI-generated artwork. Yet there are more than two sides here. Artists who can use this technology effectively will be able to create more artwork in their own distinctive style more quickly, which may allow custom artwork to be priced more affordably. No one is going to use an image like the one below for professional purposes. Those trained to get satisfying results from this technology will be the ones who can earn an income using it. Not everyone will have the time or opportunity to develop those skills, and so once again we find that there will be a need for skilled people even with this new technology. Despite what you’ll sometimes hear, it isn’t likely ever to make art or text in a way that completely replaces human beings.

[Image: DALL-E-generated illustration shared by Nicholas Proctor]