McGovern Center Policy on AI Generative Tools

ChatGPT is one example of a GPT (Generative Pre-trained Transformer) language model; many others exist (e.g., Bard, Bing Chat, ChatSonic, Claude, DALL-E 2, Firefly, Jasper.ai)1. Developed by OpenAI, ChatGPT is trained to produce human-like text in response to questions or prompts entered by a user. It functions as a chatbot, or conversational system, trained on a large dataset that extends primarily through 2021. Free accounts currently use GPT-3.5, while premium users have access to GPT-4, which will eventually be available to free accounts as well. Similar AI tools are built on one of these language models.

Teaching & Learning Implications
Concerns over plagiarism, ethics, and academic integrity abound; however, AI tools can be used in responsible ways. For instance, they can help a user get started on a form letter, such as a job-offer response or a preauthorization request, or help developers fix errors in programming code. In education, faculty and students alike are figuring out how to incorporate these tools in ways that support teaching and learning and to explore their benefits rather than focusing solely on the drawbacks, which must still be understood when using them.

AI tools can be helpful, but users should be aware of their limitations. For instance, artificial hallucination is possible: GPT models can produce well-written text that cites references that do not exist, or paragraphs that are partially or wholly incorrect from a factual perspective. Offensive and violent language has been reported in some incidents. Real-time data and more current information (post-2021) are not yet part of the language models' training, so more timely topics cannot be appropriately addressed or summarized. Finally, many respected peer-reviewed journals have developed, or are developing, policies to prohibit or limit these tools' use2, 3.

Appropriate Use
UTHealth Houston and McGovern Medical School do not expressly prohibit the use of AI tools like ChatGPT and others, leaving faculty and departments to determine how they may or may not be used in classes, educational programs, and so on. The McGovern Center recognizes that generative tools can be used responsibly but should not substitute for students' own critical thinking and reflection, both of which are key in the humanities and ethics. Thus, the McGovern Center for Humanities and Ethics has adopted the following policy for its educational programs and courses:

Students may consult AI generative tools in limited situations. Examples: to clarify definitions of terms or differentiate between terms (e.g., methods vs. methodology, health humanities vs. medical humanities), to synthesize information about ethical principles, to summarize key points from a supplementary text not primarily assigned in your program or course, or to serve as an editor to review your writing style for grammar, spelling, and organization. There may also be opportunities to explore AI tools as part of an individual or group assignment. These instances will be clearly described in course or program materials.

If a student plans to use AI generative tools for assignments involving artwork or writing, they should contact the program or course director for permission in advance. When AI tools are used, they should be cited in a format such as:

Tool Name. (Year, Month Date of query). “Text of query.” Generated using Tool Name. Link to tool name.

Though there are opportunities for appropriate use of AI generative tools, students are not to use these tools to generate reflections for writing assignments (e.g., journals, reflective essays). Students are also reminded that they are responsible for any inaccurate, biased, offensive, or otherwise unethical content they submit regardless of whether they personally authored it or used an AI tool to generate it.

Faculty will use AI detection software to identify instances of AI-generated writing in student work. This will be automatic when Turnitin (which includes AI detection software) is used as part of a Canvas assignment. If a student is found to have used AI tools without appropriate citation, the student may be subject to McGovern Medical School's code of conduct and UTHealth Houston's policies on academic dishonesty or misconduct. Remember, policies on plagiarism still apply to any uncited or improperly cited use of others' work, or to submitting others' work as your own.

Ground Rules

  1. If you use an AI tool like ChatGPT or others, cite it.
  2. Do not use these tools to generate references or citations as they may return incorrect information, such as citations to works that do not exist.
  3. While you may use AI tools in responsible ways, do not use them for reflective writing assignments. GPT language models may produce well-written text, but this does not substitute for your own independent thinking and creativity.

Deviations from this policy will be explicitly outlined in program requirements and course syllabi, such as when an assignment encourages or requires you to use an AI tool. Of note, other generative tools function as spelling or grammar checkers (e.g., Grammarly, MS Word) or review one's writing style (e.g., Hemingway Editor). Even ChatGPT, with the proper query, can serve as your editor to help with grammar, spelling, or organization. Students are free to use these apps to help rephrase sentences, rearrange paragraphs, fix grammar, and so on, in writing that is otherwise their own.

The McGovern Center’s Policy on AI Generative Tools is subject to change to ensure alignment with McGovern Medical School and UTHealth Houston policies.