2026-03-09 19:03:20

University professors are coming up with new criteria for grades due to the spread of AI

Generative models have already entered university classrooms as naturally as search engines and electronic libraries once did. Students use them for notes, drafts, and structuring ideas, while teachers are increasingly asking a question that would have sounded theoretical just a few years ago: what exactly does the university evaluate if part of the thinking work is done by an algorithm? Canadian researchers propose not to fight technology as an external threat, but to reassemble the very logic of evaluation.

A Turning Point for Higher Education

Teachers and researchers from the University of Calgary and Brock University conducted a qualitative study involving 28 educators from universities and colleges in Canada, from librarians to engineering professors. All participants agreed on one thing: the education system is at a turning point, as tools like ChatGPT make accessible a part of cognitive work that was previously considered exclusively human. The central question is concrete: if an algorithm can structure text, offer arguments, and even imitate academic writing style, is the teacher testing the student's knowledge or merely their ability to circumvent prohibitions? Framed this way, the problem ceases to be technical and becomes methodological.

The Double-Edged Sword of Generative AI

Participants in the study describe generative artificial intelligence in an academic environment as a double-edged tool:

  • On the one hand, models write in an increasingly "human" way, so familiar methods of detecting plagiarism are gradually losing their effectiveness. Algorithms confidently produce texts that look convincing even when they contain factual errors, logical gaps, or reproduce biases present in the training data.
  • On the other hand, the same technologies expand access to education: they help students with disabilities (for example, visual impairments), ease the workload for those studying in a non-native language, accelerate information retrieval, and let students focus on meaning rather than the technical details of formatting. This is the universities' dilemma: it is both impractical and unethical to ban a tool that simultaneously facilitates cheating and supports learning.

A strategy based solely on "catching offenders" will not work. Instead, the researchers propose clear rules for the use of AI and systematic training in responsible interaction with it, so that assistance enhances understanding of the material rather than replacing it.

Where is the New Boundary of Assessment?

Participants in the study highlight three areas where the new boundaries of what is permitted and what is assessed are clearest today.

  1. The first area is prompting, the ability to set a task for the algorithm. Educators consider it a legitimate academic skill. To get a meaningful answer, the student must understand the topic, break a complex problem into parts, and formulate precise instructions. A poor request yields a vague result and forces the student to refine the task parameters, which in itself demonstrates their level of understanding. In this sense, the quality of the request reflects the depth of the thinking that precedes it.
  2. The second area is critical thinking. Since generative models are capable of producing plausible texts with inaccuracies, the student must verify facts and evaluate the logic of the argumentation and the reliability of sources. Some teachers already use AI responses as teaching material: they ask students to analyze the algorithm's "arguments" and identify manipulations and unsupported conclusions. The experts stress that ignoring this aspect is pointless, since algorithmically generated information will inevitably become part of graduates' professional environment, and very soon.
  3. The third area is writing, and it is here that the disputes are most acute. A compromise approach looks like this: a student may use AI for brainstorming or finding a structure, but formulates the arguments and main ideas independently. Linguistic editing of an already written text is allowed, provided the author can critically assess the model's suggestions and preserve their own position. However, when the algorithm produces the key paragraphs and logical connections, many teachers believe an essential stage is lost: the stage at which thought is born. Instead of in the student's mind, it is "born" in the AI.

The "Post-Plagiarism Era" and New Standards of Honesty

From these discussions grows the idea of a so-called post-plagiarism era, in which co-creating text with AI is not automatically equated with academic misconduct; instead, transparency, honesty, and the ability to work critically with generated material become mandatory conditions. The teachers tried to formulate the principles of "valid assessment" in the context of digital assistants:

  • Firstly, universities should announce the rules for using AI in specific assignments in advance, so that students understand the boundaries: where AI may be used and where it is strictly forbidden.
  • Secondly, teachers can evaluate not only the final text but also the process: drafts, notes, intermediate versions, and the author's reflection on how they worked with the material. This approach shifts the focus from the result to the cognitive steps behind it.
  • In addition, assignments should center on human judgment and context: the ability to connect theory with personal experience, a professional situation, or local data that the algorithm cannot reproduce without the author's participation. Students are expected to develop evaluative thinking toward AI responses and to preserve their authorial voice, because what matters is not only the content of knowledge but also an understanding of its sources and the limits of its applicability.

As a result, Western universities face a strategic choice: treat generative models as a threat to be contained by prohibitions, or as an occasion to revise academic integrity standards and strengthen genuine learning. The participants in the Canadian study lean toward the second option, believing it is the one that preserves the meaning of higher education in the age of algorithms.
