Complete AI Guidelines

This page provides detailed guidelines to help members of the Saint Louis University community navigate the use of artificial intelligence tools, and the practices recommended for them, on our campuses.

  • Ethical use: AI must be used in ways that uphold academic integrity, fairness and human dignity.
  • Transparency and disclosure: Usage of AI tools in academic work, research, or administrative activities must be clearly disclosed. The disclosure should be transparent, prominent, and sufficient to allow others to understand the role AI played in the work product.
  • Privacy and data security: AI use must comply with SLU data protection policies and relevant laws (e.g., FERPA, HIPAA, GDPR).
  • Human oversight: AI should support, not replace, human decision-making in critical academic and administrative functions.
  • Non-discrimination and bias mitigation: AI technologies must be assessed for potential biases and actively designed to promote equity.
  • Do not input restricted use data: SLU community members must not input any restricted use data into generative AI tools.
  • Do not input confidential data: SLU community members must not input any confidential data into generative AI tools, except when permitted by validated contract language and security controls (approved by Information Technology Services (ITS)). SLU Madrid students, faculty, and staff must also receive approval from the data protection officer (DPO). The DPO may be contacted at dpo-madrid@slu.edu.
  • Do not input personal information: SLU community members must not input any personal information about SLU employees, students, faculty, or other stakeholders into a generative AI tool except when permitted by validated contract language and security controls (approved by ITS). SLU Madrid students, faculty, and staff must also receive approval from the data protection officer (DPO). The DPO may be contacted at dpo-madrid@slu.edu.
  • Do not input information that violates intellectual property (“IP”) or general contract terms and conditions: SLU community members must be aware of the terms and conditions under which they are using AI tools, and all members of the SLU community must respect and protect IP rights. It is incumbent on individual users to ensure that the inputs to and outputs of their AI tools are properly handled under copyright and patent law, data protection regulations, and laws addressing identity theft. Please note that vendor licenses govern many of the digital resources provided by the SLU Libraries (“Libraries”), and some publishers assert that using their content with AI tools is not allowed. Please contact SLU Libraries for assistance in determining acceptable uses of licensed content with an AI tool or large language model.
  • Confirm the accuracy of the output provided by generative AI tools: SLU community members must verify the accuracy of information generated by generative AI tools against additional sources before relying on it. AI-generated content can be inaccurate, biased, or entirely fabricated (sometimes called “hallucinations”), and it may contain copyrighted material. You are responsible for any content that you publish that includes AI-generated material.
  • Check the output of generative AI tools for bias: SLU community members must consider whether the data input into, and the output of, generative AI tools produce decisions that may result in a disparate impact to individuals based on their protected classifications under applicable law, such as race, ethnicity, gender, national origin, age, sexual orientation, or disability status. Do not rely on any output that is indicative of a potential bias.
  • Disclose the use of generative AI tools: SLU community members who leverage generative AI to produce any written materials or other work product must disclose that those materials or work product are based on or derive from the use of generative AI. Always be transparent if you are relying on the output of a generative AI tool.
  • Comply with third-party intellectual property rights: SLU community members must not hold out any output generated by generative AI tools as their own. If you quote, paraphrase or borrow ideas from the output of generative AI tools, confirm that the output is accurate and that you are not plagiarizing another party’s existing work or otherwise violating another party’s intellectual property rights.
  • Do not use generative AI tools to produce malicious content: SLU community members are prohibited from using generative AI tools to generate malicious content, such as malware, viruses, worms, and Trojan horses, that can circumvent access control measures put in place by SLU or any other third party to prevent unauthorized access to their networks. SLU community members are prohibited from using AI tools with the intent to harm another person.
  • Complete training on effective, ethical, and legal use of AI: Before using any AI tools that could implicate ethical, legal, or policy considerations, SLU community members are expected to complete training on proper AI use so that they are aware of the issues presented and of the mechanisms and practices that help avoid violating applicable laws, regulations, or policies.
  • Instruct the generative AI system not to use inputs for training the system: Some generative AI systems permit users to opt out of the use of their data to train future iterations of the system. Where that option is available, SLU community members are expected to exercise it.

Academic Use of AI (Teaching and Learning)

The following statements are meant to guide the use of AI tools in academic settings, including use by students for learning and use by instructors for teaching.

For Students

  • Students Must Uphold Academic Integrity: Students must follow the academic integrity policy, faculty syllabi, and any applicable guidance from the student handbook when using AI as defined in section 8, including all subtypes of AI, such as generative AI, agentic AI, and other intelligent systems.
  • Instructor Permission Required: Unless an instructor includes a clear statement in their syllabus granting permission, the use of generative AI tools to complete an assignment or exam is prohibited. Unauthorized use of AI will be treated like unauthorized assistance or plagiarism. Instructors are encouraged to insert a syllabus statement regarding the use of generative AI (GAI) in their courses.
  • Disclosure and Citation: If permitted by the course instructor, students are encouraged to acknowledge and properly cite any use of AI applications in their academic work, aligning with institutional academic honesty policies, disciplinary standards, or other applicable professional standards.
  • Communicate with Instructors: Students are encouraged to speak with their instructors regarding their expectations for AI tool usage in each course.

For Instructors (Instruction and Learning)

  • Clear Expectations: Instructors should provide clear expectations for AI-assisted learning tools and their appropriate use. Per the course syllabus policy, instructors are required to share clear expectations at the beginning of each semester through the syllabus (including specific syllabus statements), policy distribution, and class discussion.
  • Enhance Pedagogy: AI tools should be used to enhance pedagogy, not to replace instruction and engagement.
  • Course AI Use: Develop statements about the use of AI tools in courses, clearly defining what is considered appropriate and inappropriate use.
  • Facilitate Discussion: Encourage discussions about AI in the classroom and online forums. This conversation can create opportunities to talk about the evolution of tools, their potential benefits in specific disciplines, their limitations, and how they relate to course objectives and student learning.
  • Support Resources: Utilize available support resources for AI tools in the classroom, such as individual consultations and learning communities offered by the Reinert Center for Teaching and Learning.
  • AI Detection Tools: Be aware that AI detection tools carry risks of misidentification and have not been reliably shown to detect AI use. Give careful consideration to whether these tools are appropriate for assessing student work.
  • Responsible Employment: Instructors should not inappropriately employ GAI in their teaching activities, including course design, syllabus development, assignment development, course materials development, and the formative and summative assessment of student learning, in ways that violate institutional standards and policies.

Academic Use of AI (Research Settings)

The following statements are meant to guide researchers on the use of AI tools.

For Faculty Researchers (Research)

  • Compliance and Integrity: Researchers utilizing AI must comply with federal and institutional research integrity policies, including the Responding to Allegations of Research Misconduct policy.
  • Transparency in Methods and Authorship: Be transparent regarding AI use in research, describing it in the methods, the acknowledgements, or elsewhere, as appropriate. AI-generated research output must be clearly identified, and authorship must reflect substantive human contributions. Researchers must comply with the terms of any federal, state, or private grants with regard to AI use or allowability, as well as any written policies of the scientific or other journals where research output is published.
  • Accuracy Responsibility: Researchers are responsible for the accuracy of any content created by AI that is included in any research output. Use caution, as AI has been known to generate non-existent citations or images for experiments that were never conducted.
  • Unpublished Research Data: Avoid uploading to, or using as input for, generative AI tools any unpublished research data, including data provided by or pertaining to researchers or research subjects. Doing so may lead to the disclosure of unpublished work, impede future intellectual property protection, or create privacy violations. This includes unpublished manuscripts or funding proposals that researchers may be asked to peer review, as some funding agencies (e.g., NIH, NSF) prohibit using generative AI for peer review.
  • Confidential Data: Avoid uploading, or using as input, confidential information belonging to SLU or other individuals or organizations. Generative AI tools may not provide protection for confidential information, and their use could create the potential to breach confidential contractual commitments. Examples of confidential information include unpublished manuscripts, research funding proposals, and personal information related to research subjects.
  • External Policies: Researchers are expected to follow the policies of journals, funding agencies, and professional societies through which they report their research (e.g., some journals explicitly prohibit AI-generated text, figures, images, or graphics).
  • IRB Approval: The application of AI in sensitive research areas (e.g., biomedical, social sciences) may require Institutional Review Board (IRB) or other research compliance committee approvals, as applicable. Please refer to the Institutional Review Board Standard Operating Policies and Procedures for the Protection of Human Research Subjects policy in PolicyStat for more information.

Administrative Use of AI

Administrative applications in higher education are distinct in their enterprise-wide impact, affecting faculty, staff, students, and external stakeholders. The thoughtful implementation of AI tools at the administrative level may enhance efficiency, optimize resources, and empower employees to advance the university’s mission and corporate purposes.

For Faculty and Staff (Administrative)

The following statements are meant to guide the use of AI tools in administrative settings, both for faculty and staff.

  • Auditing for Fairness: AI systems used in critical administrative functions such as admissions, hiring, student services, and decision-making must be regularly audited for fairness and accuracy. As stated previously, the implementation of such tools must go through the normal contract approval processes and be reviewed by ITS for appropriate contract language and security controls.
  • Human Review and Appeal: Automated decision-making processes must include a mechanism for human review and appeal.
  • Responsible Use of AI Tools: Staff should not inappropriately employ GAI in their work, plagiarize by submitting work that is not their own creation, or inappropriately share confidential or protected data with a GAI provider.
  • Implementation Safeguards: AI tools may be implemented by an administrative unit when appropriate safeguards are in place, their use aligns with guidance from the University AI Committee, and approval is received from relevant university authorities (e.g., ITS, Office of General Counsel, Compliance). Implementation should adhere to existing contract approval processes.