Proctor Library

Generative AI Ethics and Ethical Use in Academic Contexts

This guide is designed to help students use generative AI tools ethically to enhance their educational journey.

Knowledge is Power. Privacy is Freedom.

Generative AI tools are incredibly powerful, but like any digital tool, they come with considerations for your privacy and data security. Understanding how these tools handle your information is crucial for using them responsibly and protecting yourself.

A four-panel black-and-white comic titled "Me, My AI—and Our Not-So-Private Life."  Panel 1 (Just Me and My AI…): A student smiles at their laptop while a friendly AI face appears on-screen. The AI says, “Of course, I can help with your breakup text, your term paper, and your midnight existential crisis!” Caption: “When your AI knows you better than your therapist…”  Panel 2 (Data Dilemmas): The AI is whispering private details about the student (“She’s into cottagecore, cries at dog videos...”) to a sinister man in a suit and sunglasses who replies, “Delightful! She’s a goldmine!” Caption: “Meanwhile, in the shadowy corners of the internet…”  Panel 3 (Plot Twist): The student reads a pop-up that says, “Your AI-generated diary entry has been used to train 14 new marketing bots.” She looks shocked and says, “WAIT—That was private!” Caption: “When ‘training data’ gets too personal…”  Panel 4 (The Comeback): The student types on her laptop, which now displays “AI Safety Tips — Proctor Library LibGuide.” The AI looks sad and says, “W-wait… we were good together!” Caption: “Knowledge is power. Privacy is freedom.”

Image created by Trina McCowan Adams using ChatGPT. Licensed under CC BY-NC-SA 4.0.


What Happens to My Data When I Use AI?

When you type a prompt into a public generative AI tool (like ChatGPT, Claude, or Gemini), that input isn't necessarily private. Here's what you need to know:

  • Data Collection & Training: Many AI models learn from the conversations users have with them. This means your prompts and the AI's responses could be used to further train the model, potentially incorporating your input into future versions of it.

  • Data Retention: AI providers often retain logs of your conversations for various purposes, including improving the service, monitoring for abuse, and complying with legal obligations. The duration of this retention can vary by provider and by any settings you might be able to adjust.

  • Human Review: In some cases, your conversations might be reviewed by human trainers to assess the AI's performance and make improvements.

  • No Expectation of Privacy: For most free, public AI tools, you should operate under the assumption that anything you enter is not private. This is similar to how a public search engine or social media platform operates.

Privacy Risks to Be Aware Of

  • Sensitive Information Exposure: If you input personal, confidential, or sensitive information (e.g., your student ID, health details, financial data, or unpublished research), there's a risk that this information could be inadvertently stored, become part of the training data, or even be "regurgitated" in a response to another user's prompt (though providers work to minimize this).

  • Data Leakage: In a professional or academic setting, inputting proprietary or sensitive institutional data into a public AI tool could lead to a data breach, violating university policies or research ethics.

  • "Hallucinations" of Personal Data: While rare, an AI might generate content that, by coincidence, includes details that resemble real personal information.

  • Phishing & Social Engineering: Malicious actors can use AI to craft highly convincing phishing emails, fake social media profiles, or deceptive websites. Be vigilant about unsolicited communications that seem too good to be true or pressure you to reveal personal information.

  • Malware Generation: AI can potentially be prompted to generate malicious code. Be cautious about running code generated by AI without thoroughly understanding and vetting it, especially from untrusted sources.
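One concrete way to vet AI-generated code before running it is to scan it for calls that deserve a closer look. Below is a minimal sketch in Python using the standard-library `ast` module; the `flag_risky_calls` helper and the lists of flagged names are illustrative examples, not an official or exhaustive checklist, and a clean scan is no guarantee of safety:

```python
import ast

# Illustrative (not exhaustive) names that warrant manual review
# before running code from an untrusted source.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}
RISKY_MODULES = {"os", "subprocess", "socket", "urllib", "requests"}

def flag_risky_calls(source: str) -> list[str]:
    """Return human-readable warnings for suspicious calls in `source`."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Bare calls such as eval(...) or exec(...)
            if isinstance(func, ast.Name) and func.id in RISKY_CALLS:
                warnings.append(f"line {node.lineno}: call to {func.id}()")
            # Module calls such as os.system(...) or subprocess.run(...)
            elif isinstance(func, ast.Attribute):
                root = func.value
                if isinstance(root, ast.Name) and root.id in RISKY_MODULES:
                    warnings.append(f"line {node.lineno}: {root.id}.{func.attr}()")
    return warnings

sample = "import os\nos.system('ls')\n"
for warning in flag_risky_calls(sample):
    print(warning)  # flags the os.system call on line 2
```

A tool like this only surfaces obvious red flags; the real safeguard remains reading and understanding the code yourself, or running it in an isolated environment.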

Best Practices for Using AI Safely

  1. Never Input Sensitive or Confidential Information: This is the most crucial rule. Do not paste in:

    • Your full name, student ID, address, phone number, or other Personally Identifiable Information (PII).

    • Confidential university documents, research data, or intellectual property.

    • Proprietary information from internships or jobs.

    • Details of ongoing legal cases or sensitive personal situations.

  2. Assume Public Input: Treat anything you type into a public AI chatbot as if you are posting it on a public forum.

  3. Adjust Privacy Settings (if available): Some AI platforms allow you to opt out of your conversations being used for model training or to delete your chat history. Explore the settings of the tools you use.

  4. Use Strong, Unique Passwords & MFA: Protect your AI accounts (and all online accounts) with robust, unique passwords and enable multi-factor authentication (MFA) whenever possible.

  5. Be Skeptical of AI Output: Just as AI can "hallucinate" facts, it can also inadvertently reveal or combine data in unexpected ways. Always critically evaluate any output, especially if it seems to contain personal information.

  6. Understand University/Employer Policies: If you are using AI for academic work or employment, ensure you understand and comply with your institution's or employer's specific data security and privacy policies regarding AI tools.
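Some of the practices above can be partly automated. As a minimal sketch, the Python snippet below scrubs a few obvious kinds of PII from text before it is pasted into a public chatbot; the `redact` helper and its patterns are hypothetical illustrations, not a complete PII detector, so manual review is still essential:

```python
import re

# Illustrative patterns only; real PII detection needs far more care.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a [REDACTED-<kind>] placeholder."""
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{kind.upper()}]", text)
    return text

prompt = "Email me at jane.doe@example.edu or call 352-555-0123."
print(redact(prompt))
# -> Email me at [REDACTED-EMAIL] or call [REDACTED-PHONE].
```

Regex-based scrubbing misses names, addresses, and context-dependent details, which is why the rule above is stated as "never input" rather than "redact and input."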

By being mindful of what you share and understanding the privacy implications, you can use generative AI as a powerful learning tool without compromising your personal or academic security.
