Academic Integrity for Instructors

Academic Integrity in the Age of AI

The emergence of generative AI is disrupting traditional understanding of academic integrity, plagiarism, knowledge creation, and even authorship. While these topics are still being debated in academic and legal circles, this page offers some resources intended to help faculty and students navigate academic integrity at UWindsor as generative AI becomes embedded in academic practice.

Is Generative AI allowed to be used for academic work at UWindsor?

As with other technologies, the default position is that generative AI is allowed to be used in academic work unless the instructor has specifically stated in the syllabus that it cannot. We recommend taking a nuanced approach to determining whether AI is appropriate in a given academic task, considering that it may be fine to use in some contexts and not in others. Wherever generative AI is used substantially, especially in content generation, it is good academic practice to acknowledge (see Citation Guide below) and explain that use.

AI Detectors

AI detectors are not permitted for use at UWindsor due to their low efficacy, demonstrated biases against certain groups, and the potential for false accusations. While it may be possible in the future that a reliable method of detection is discovered, at present there are no tools that provide evidence that could be relied upon in academic integrity investigations.

Similarly, the literature has repeatedly shown that both faculty and students are unreliable at detecting content created by generative AI, a trend likely to continue as these tools become more sophisticated. Approaching potential breaches of academic integrity with openness and trust is an important starting point, especially given the complexity of potential AI use cases.

Citation Guide

The McMaster University Library has created a useful guide to acknowledging the use of AI, including when and how to do this. The guide offers advice on citation in multiple citation formats. If AI has been approved for use in your course, publication, or other academic work, this guide can help you determine the best approach to acknowledging your AI use.

Conversation guide for talking to students suspected of AI misconduct

It is very difficult to definitively prove that generative AI has been used in the creation of content, so in cases of suspected academic integrity breaches it is always important to start with a conversation with the student. We have adapted McMaster's and Conestoga College's guiding questions to help faculty navigate that conversation. The UWindsor guide is available for download.

Academic Integrity Policy

There are two key documents governing academic integrity at UWindsor: Bylaw 31 (Academic Integrity) and the Student Code of Conduct. Where a faculty member has clearly stated in their syllabus the boundaries of acceptable use of generative AI for their course, and a student is suspected of breaching those boundaries (and therefore the Student Code of Conduct), the Academic Integrity procedures in Bylaw 31 should be followed.

Canadian Higher Education Context

Dr. Sarah Eaton of the University of Calgary is one of the foremost experts on academic integrity in the world, and has shared a number of helpful resources and pragmatic advice on ways to think about academic integrity in the era of AI.

Faith Marcel and Phoebe Kang reviewed AI guidelines across Canadian higher education institutions and offer insights into the challenges and opportunities of AI with regard to academic integrity, especially around academic writing. Moya et al. (2024) provide a scoping review of academic integrity and AI in higher education that is also insightful.

While not specific to higher education, the Government of Canada also provides very pragmatic guidance on the responsible use of generative AI within its institutions, which shares a similar focus with the academic integrity that universities strive to uphold.

Global Context

Universities and other organisations around the world are all grappling with the same academic integrity questions related to generative AI. Some have shared helpful advice and ways of thinking about these challenges that may help you frame your own understanding of the issues.

The Australian Government Tertiary Education Quality and Standards Agency (TEQSA)

Australia’s Tertiary Education Quality and Standards Agency (TEQSA) released a paper in August 2024 on “The evolving risk to academic integrity posed by generative Artificial Intelligence: Options for immediate action”. The paper argues that higher education institutions need systematic responses to the impact of AI on academic integrity, including technological adaptations and changes to pedagogical approaches, policy, and institutional culture. It offers several recommendations for short-term action: shifting the focus from detecting cheating to detecting whether learning has happened, examining all programs for areas that will need adaptation, getting to know students as individuals, and being transparent about AI use. TEQSA also recommends asking students to document and show their working when using AI, engaging in conversations with them as a form of assessment, and working with faculty colleagues and educational support staff to address the issues that arise.

The European Network for Academic Integrity (ENAI)

The European Network for Academic Integrity released their recommendations for the ethical use of generative AI in May 2023. They recognise that generative AI both poses challenges for academic integrity and has many potential positive uses, and they encourage institutions to ensure that users know how to properly acknowledge the use of AI in their work.

Committee on Publication Ethics (COPE)

The Committee on Publication Ethics (COPE) developed a position statement on generative AI and authorship. They argue that AI cannot be considered an author because it cannot take responsibility for the work. COPE suggests that authors who use AI in writing must disclose which tools were used and how, and must accept responsibility for all content generated by AI.