AI chatbots such as ChatGPT, Copilot, Gemini, and Claude can be powerful tools for learning, productivity, and creativity. However, it’s essential to be aware of privacy and information security risks when using these platforms in the context of higher education. Below are key things to know and consider.
1. What are AI Chatbots?
AI chatbots are programs that use artificial intelligence to understand and generate human-like responses in text or spoken language. They can assist with research, answer questions, and help with various tasks, but they also process and store information you provide.
2. Types of Information You Should Avoid Sharing
While using AI chatbots, avoid sharing personally identifiable information (PII) or sensitive institutional information. This includes, but is not limited to:
- Names, email addresses, or phone numbers
- Social Security numbers, passport numbers, or financial information
- Confidential academic data (e.g., grades, student records, unpublished research)
- Passwords or security credentials
- Institutional policies, proprietary information, or other restricted data
Why: Chatbots may store or transmit the data you share with them, potentially leading to unintentional leaks or breaches of confidentiality.
3. How Data is Collected and Used by AI Platforms
AI chatbots often collect data to improve their responses and performance. However, how that data is used, stored, and shared may differ depending on the platform. Some key points to consider:
- Data Retention: Check whether the platform retains conversations for training purposes. For instance, some platforms save input to improve the model but may share anonymized data with third parties.
- Third-Party Sharing: Some chatbots may share data with their developers or third-party organizations for various purposes, such as improving their services or providing targeted ads.
- Data Ownership: Understand who owns the data after you submit it. Many AI platforms reserve the right to use submitted data for their own purposes.
Recommendation: Review each platform's privacy policy before use, especially if discussing sensitive or institutional information.
4. Best Practices for Safe Usage
To help keep your information secure, follow these best practices:
- Anonymize Data: When possible, remove identifying information or sensitive content before inputting data into a chatbot.
- Use for General Queries: Stick to using chatbots for general research or inquiries rather than highly specific institutional tasks or personal information.
- Secure Devices and Networks: Make sure that your device is secure (with updated software and antivirus) and use a secure network (avoid public Wi-Fi) when accessing AI chatbots.
- Be Cautious with Output: The responses generated by chatbots may not always be accurate. Double-check any advice or information they provide, especially if using it for academic or professional purposes.
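The "Anonymize Data" practice above can be partially automated with a simple pre-filter that replaces obvious PII patterns before text is sent to a chatbot. This is a minimal illustrative sketch, not a university-provided tool: the `redact_pii` function and its patterns are assumptions, and simple patterns like these catch only common formats, so they are no substitute for reviewing the text yourself.

```python
import re

def redact_pii(text: str) -> str:
    """Replace common PII patterns with placeholders.

    Illustrative only: catches obvious email, SSN, and U.S. phone
    formats, and will miss names, student IDs, and unusual formats.
    """
    # Email addresses -> [EMAIL]
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # U.S. Social Security numbers (e.g., 123-45-6789) -> [SSN]
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    # U.S.-style phone numbers (e.g., 816-555-0123) -> [PHONE]
    text = re.sub(r"\(?\b\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    return text

print(redact_pii("Contact Jane at jane.doe@park.edu or 816-555-0123."))
```

Run the filter on any prompt containing real names, contact details, or records before pasting it into a chatbot, then review the result manually for anything the patterns missed.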
5. Institutional Policies and Guidelines
Park University has specific guidelines for using AI tools in educational and professional contexts. Be sure to:
- Follow data privacy and information security policies set by the institution, especially when using AI for university-related tasks.
- Consult with your IT or Legal departments if you are unsure whether using a chatbot in a specific context complies with institutional or legal standards.
- Be mindful of ethical considerations in using AI for academic work, research, or teaching.
6. Legal and Ethical Implications
Using AI chatbots may raise legal and ethical concerns. Keep in mind:
- Copyright Issues: AI may generate content based on proprietary or copyrighted material. Be careful when using AI-generated content in your work.
- Academic Integrity: Using AI chatbots in assignments or research could raise issues around plagiarism or academic dishonesty. Always disclose the use of AI as required by Park University’s academic policies.
- GDPR/FERPA: Because Park University serves students and staff from outside the U.S., be aware of international data privacy regulations, such as the European Union's General Data Protection Regulation (GDPR). In the U.S., FERPA (the Family Educational Rights and Privacy Act) protects students' education records.
Further Resources
For more detailed guidance, please refer to:
- Park University's Information Security Policies
- The Privacy Policies of ChatGPT, Copilot, Gemini, and Claude
- FERPA and GDPR resources for handling sensitive student and staff data
If you have any concerns about whether a particular use of AI chatbots is secure or compliant, please reach out to the IT Help Desk for further advice.
This document aims to help you use AI chatbots responsibly, ensuring privacy and security are upheld in all interactions.