
Generative AI Ethics and Ethical Use in Academic Contexts

This guide shows students how to use generative AI tools ethically to enhance their educational journey.

What Are AI Hallucinations?

Generative AI tools are incredibly sophisticated, often producing responses that sound convincing and authoritative. However, it's crucial to understand a significant limitation: AI models can "hallucinate." This means they can confidently generate information that is completely false, misleading, or nonsensical, and even fabricate citations to non-existent sources.

 

In the context of AI, a "hallucination" is when the model produces an output that contains false or misleading information presented as fact. It's a metaphorical term; the AI isn't experiencing a delusion like a human would. Instead, due to the way it's trained, it sometimes generates text that is statistically probable to appear in response to a prompt, even if that text has no basis in reality or is factually incorrect.

A four-panel comic titled "AI: NOT ALWAYS SMARTER THAN A 5TH GRADER" humorously illustrates a college student's experience with AI hallucination.

Panel 1: A smiling student sits at a desk, typing on a laptop. Speech bubble: “Okay ChatGPT, write me a quick summary of Hamlet—and make it spicy!”

Panel 2: The student looks skeptical and reads from the screen. Speech bubble: “Wait… Leonardo? Frappuccino?? That doesn’t feel rig…”

Panel 3: The student holds up a phone showing a Google result: “William Shakespeare. 1600.” His speech bubble says, “Bro. You had ONE job.” A small caption below the AI says, “Trust me. I read all the scrolls.”

Panel 4: The student stands at the front of a classroom holding a sign that reads “DO NOT TRUST AI BLINDLY.” He says, “Lesson learned: AI can hallucinate harder than I did during finals week.” A professor in the background adds, “Accurate… and disturbingly relatable.”

Image created by Trina McCowan Adams using ChatGPT. CC BY-NC-SA 4.0.


Common examples of AI hallucinations include:

  • Fabricated citations: Generating references to books, articles, or authors that don't exist, or misattributing quotes and findings.

  • Invented data or statistics: Creating numbers or figures that appear realistic but are entirely made up.

  • Factually incorrect statements: Presenting false information as true, even on seemingly simple topics.

  • Conflicting or nonsensical information: Providing contradictory details within the same response or generating text that doesn't make logical sense.

  • Misinterpretations: Drawing incorrect conclusions or connections between concepts.

Why Do AI Models Hallucinate?

AI models "hallucinate" for several reasons, stemming from their design and training process:

  1. Pattern Recognition vs. Understanding: LLMs are trained to predict the next most probable word in a sequence based on the patterns in their massive training data. They don't understand facts or reality the way humans do. If a pattern suggests a citation should be present, the AI will generate one, even if no real source exists (see the sketch after this list).

  2. Gaps in Training Data: If the AI's training data has gaps, biases, or inconsistencies, the model might "fill in" missing information with plausible-sounding but incorrect details.

  3. Complex Prompts: Ambiguous or overly complex prompts can confuse the AI, leading it to generate less coherent or less accurate responses.

  4. Lack of Real-World Grounding: Current AI models have no direct experience of the physical world. They learn from text and images, not from interacting with reality, which can limit their grasp of factual accuracy.
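To see item 1 in miniature, here is a toy Python sketch of next-word prediction. It is purely illustrative (not how any real LLM is built): the "model" only counts which word followed which in a two-sentence sample, while real LLMs use neural networks trained on vastly more text. But both share the core behavior shown here: the output is whatever continuation is statistically likely, with no check against reality.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny sample.
# Note the sample itself is entirely accurate; the falsehood comes from
# how the model recombines its patterns.
sample_text = (
    "hamlet was written by shakespeare . "
    "doctor faustus was written by marlowe ."
)

follow_counts = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return a statistically likely next word (plausible, not verified)."""
    candidates = follow_counts[word]
    if not candidates:
        return "."
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# Generate a confident-sounding continuation from a prompt word.
word = "hamlet"
output = [word]
for _ in range(5):
    word = predict_next(word)
    output.append(word)

# Roughly half the time this prints "hamlet was written by marlowe .",
# which is fluent, confident, and false: a hallucination in miniature.
print(" ".join(output))
```

The same mechanism explains fabricated citations: if reference-like text usually follows a request for sources in the training data, the model produces reference-like text, whether or not the source exists.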

The Imperative of Verification: Why You MUST Fact-Check AI

Given the potential for hallucinations, you should never blindly trust information generated by an AI model, especially for academic work. Relying on unverified AI output can lead to:

  • Academic Misconduct: Submitting work containing fabricated sources or false information is a serious breach of academic integrity, even if you were unaware the AI "made it up."

  • Damaged Credibility: Using incorrect information in your assignments can negatively impact your grades and your reputation as a scholar.

  • Misinformation Spread: If you use unverified AI output and share it, you contribute to the spread of misinformation.

  • Hindered Learning: Verifying information forces you to engage critically with the material, a vital part of the learning process that is bypassed when AI output goes unchecked.

Strategies for Verifying AI-Generated Information

Think of AI as a brainstorming partner or a starting point, not an authoritative research tool. You are always the ultimate fact-checker and are responsible for the accuracy of your submitted work.

  1. Cross-Reference with Credible Sources: This is the most important step.

    • For factual claims: Check against encyclopedias (Wikipedia as a starting point, then academic encyclopedias), reputable news organizations, government websites (.gov), and established scholarly sources.

    • For statistics: Look for data from official statistical agencies, research institutions, or peer-reviewed journals.

    • For citations: Look up every single citation provided by an AI. Does the source exist? Is the author correct? Does the content match what the AI claimed? Many AI-generated citations are entirely fake (a quick programmatic DOI check is sketched after this list).

  2. Consult Your Library & Librarians: Librarians are experts in evaluating information. They can help you identify credible sources, navigate academic databases, and teach you advanced verification techniques.

  3. Use Search Engines Critically: Use general search engines (like Google or DuckDuckGo) and academic search engines (like Google Scholar) to search for keywords, names, and concepts generated by the AI. Prioritize results from reputable domains (.edu, .gov, well-known academic publishers, established news organizations).

  4. Look for Consistency & Logic: Does the information make sense in context? Does it contradict other known facts? AI sometimes struggles with logical consistency.

  5. Be Skeptical of Specifics: AI is prone to "hallucinating" specific details such as dates, names, precise figures, and URLs. Always verify these elements.
     
  6. "Reverse Image Search" for AI-Generated Images: If an AI generates an image for your use, a reverse image search (e.g., Google Images, TinEye) can sometimes reveal if similar images exist or if the image itself is known to be AI-generated or manipulated.
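For the citation check in strategy 1, DOIs offer a quick first screen. The sketch below is a minimal example, assuming Python with standard-library networking and the public Crossref REST API at api.crossref.org; the DOI shown is a made-up placeholder. A missing record is a red flag, but even a real DOI is not proof: you still must confirm the authors, title, and year match what the AI claimed.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def lookup_doi(doi):
    """Ask the public Crossref API whether a DOI resolves to a real record.

    Returns the record's title if found, or None if Crossref has no entry
    (a common sign of a fabricated citation).
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            record = json.load(response)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # Crossref has no record of this DOI
            return None
        raise
    titles = record["message"].get("title", [])
    return titles[0] if titles else "(untitled record)"

# Hypothetical DOI copied from an AI-generated reference list.
doi = "10.1234/placeholder.2023.001"
title = lookup_doi(doi)
if title is None:
    print(f"No Crossref record for {doi}: treat this citation as suspect.")
else:
    print(f"{doi} resolves to: {title!r}")
    print("Now confirm the authors, journal, and year match the AI's claim.")
```

Keep in mind that many books and older works legitimately lack DOIs; for those, search the title in Google Scholar or your library's discovery service instead.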

By developing strong verification habits, you can harness the benefits of generative AI while upholding the highest standards of academic honesty and critical thinking.
