Return of Traditional Exams Faces Scrutiny Amid Advancements in Artificial Intelligence Technology
In the evolving landscape of American education, universities are adapting to the challenges posed by AI-assisted academic dishonesty. The University of Michigan has recently added a clause to its honor code, requiring students to declare any use of AI in assignments, reflecting a broader movement towards transparency and responsible AI use.
AI-enabled cheating is a growing concern, with studies indicating that its incidence is rising: nearly 88% of students admitted to using AI tools on assignments in some form, according to recent surveys. As a result, instructors are reconsidering how to measure learning outcomes, with many favoring handwritten exams to reduce the risk of AI interference.
Public universities and community colleges, in particular, tend to favor scalable solutions such as handwritten exams in proctored settings. This approach, part of a wider strategy that includes project-based assessments, oral defenses, and iterative assignments, is designed to limit AI assistance and evaluate individual understanding in a direct, controlled environment.
Some institutions, like a community college in Ohio, equip their testing centers to oversee paper-based essays and include AI policies in every course syllabus. Others, such as UCLA, are shifting towards oral presentations and in-class written responses. Private institutions such as Brown University and Williams College integrate discussions about ethics and AI use into smaller seminar-style courses.
Elite private universities often use small class sizes to pilot in-person assessments like case analyses and real-time writing demonstrations. Dr. Tricia Bertram Gallant, director of the Academic Integrity Office at UC San Diego, emphasizes the need for cultural education on academic integrity in both analog and digital spaces.
Handwritten answers are believed to better reveal a student's original thinking and effort; AI-generated content, by contrast, often lacks the depth, individuality, and thought progression found in genuine student work. Balancing fairness with this shift, some institutions offer reasonable accommodations for students with physical disabilities or learning disorders.
The International Center for Academic Integrity and the Council of Writing Program Administrators serve as resources for faculty and institutions navigating AI in education. As the landscape continues to evolve, institutions are adopting policies that foster ethical AI use through transparency, discussion, and human-centered approaches. The return to handwritten exams is part of academia's broader response to digital disruption, one that moves away from merely policing AI use and towards transparent, inclusive policies and assessment methods.