AI Limitations
Generative AI applications have three key limitations that are important for all students to understand:
- Source Content – Text produced by generative AI applications like ChatGPT, Claude, and Copilot is derived from models trained on vast amounts of material from the internet. Because the internet contains unregulated and potentially biased material of questionable quality and accuracy, AI-generated text should be suspected of being inaccurate, low quality, and biased. Students are encouraged to confirm the accuracy of all AI-generated ideas and to critically assess the quality and potential bias of all AI-generated content.
- Hallucinations – AI hallucination refers to the tendency of AI applications to generate false or fabricated information in response to prompts or inquiries. A common example is citing nonexistent articles when asked to provide references for generated text. General-purpose generative AI applications, such as ChatGPT, Claude, and Copilot, carry a high risk of hallucinations. AI research assistants like Consensus and Elicit carry a lower risk because they draw from a defined body of academic papers. Regardless of the type of AI application used, students should always confirm the accuracy of AI-generated output.
- Confidentiality – Students must be diligent in avoiding disclosure of confidential information when interacting with AI applications. In accordance with the Health Insurance Portability and Accountability Act (HIPAA), DMSc students should not enter patients' protected health information into any AI application. Students should likewise avoid sharing content from their educational records, in accordance with the Family Educational Rights and Privacy Act (FERPA). Violating HIPAA or FERPA can result in serious legal consequences.