Policy on Use of Generative AI Tools for Education and Learning
OIST recognizes the recent rapid introduction of a range of generative AI tools that can benefit Students and Staff, and encourages their adoption and appropriate use at OIST with due attention to ethics, security, and the risks arising from their use. The integration of such AI technologies has the potential to significantly enhance productivity and outcomes in academia, but it also introduces new risks. This policy provides a framework for the adoption of AI tools in the OIST Graduate School, promoting the ethical and appropriate use of AI technologies in scientific research, education, and writing. By adhering to this policy, Students and Staff can harness the potential of generative AI tools while upholding academic integrity and ensuring the responsible and inclusive advancement of knowledge.
Ethical Use of AI Tools:
- All use of generative AI tools in the OIST Graduate School should adhere to ethical standards of fairness, transparency, and accountability. Tools should be evaluated for potential biases, risks, and unintended consequences before their adoption.
- Faculty and students must be aware of the limitations of generative AI tools and exercise critical judgment about the credibility of their output. Human expertise and oversight should remain an integral part of the research and education process. Generative AI tools may produce inappropriate or inaccurate output, and it is the user's responsibility to check all output for accuracy and appropriate language.
- Use of generative AI output without attribution is unethical and constitutes plagiarism.
Data Privacy and Security:
- All use of generative AI tools in the OIST Graduate School must adhere to strict data privacy and security measures. Personally identifiable information and sensitive data, including unpublished research findings, must be handled with the utmost care and in compliance with relevant laws and OIST regulations. Avoid entering personal data or unpublished research output into AI prompts, as there is no guarantee that such data will not become accessible to others.
- The OIST Graduate School will provide training and resources to educate faculty, staff, and students about data privacy and security best practices when using AI tools.
- The OIST IT department assessed ChatGPT as not posing a security risk (February 2023), but users should nonetheless exercise appropriate caution and refrain from sharing sensitive text and images with any generative AI system.
Academic Integrity:
- AI tools should complement and enhance the learning, research, and writing process rather than replace critical thinking and academic rigor. Plagiarism, data manipulation, and any other form of academic misconduct remain strictly prohibited.
- Faculty should educate students about the appropriate use of AI tools in their courses and laboratories, and promote a culture of integrity and responsible scholarship.
- Students and Staff who use AI tools for writing assignments, articles, presentations, and other public communications should identify the tools that were used and, in the case of generative AI, should also record the prompts and source materials used to produce the output.
Legal Considerations:
- Most AI tools are provided by private companies whose use and security of training data and prompt materials are opaque. All care should be taken to protect private and confidential information, including personal data and intellectual assets.
- MEXT issued guidelines on the use of AI in July 2023, allowing the use of generative AI tools with appropriate education and with due consideration of ethics and plagiarism. Passing off AI-assisted work as one's own would be regarded as cheating. Students should avoid entering any personal information and should take care to avoid copyright infringement.
Associated laws and regulations include:
- Act on the Protection of Personal Information
- Copyright Act
- OIST PRP11 Rules for Personal Information Protection
Supplementary Provisions
This Policy on Use of Generative AI Tools for Education and Learning shall come into force on October 1, 2023.