Artificial Intelligence (AI) is here to stay, and no one can stop it now, as the rapid development of the field since the surge of engines such as ChatGPT has shown. What we do not know is exactly how it will operate in society and how much transformation, good or bad, it will cause. This vast, uncharted ocean of opportunities is the beginning of probably one of the biggest revolutions in human and technological history, and we should therefore be excited about what is yet to come and be discovered. New tools that can assist us in all sorts of settings, such as hospitals or administrative offices, are welcome and rightly seen as important technological advancements that will ultimately benefit us all.
However, everyone is surely aware of the many challenges it presents. With a few sentences, we can now prompt an AI engine to write anything we wish. It could be the biography of our favorite football player, the “name of that Norwegian painter that did that painting with an alien screaming,” how to properly cook pinnekjøtt, or to “write me a 2000-word essay with an introduction, main text, conclusion and references on ‘Why a lobster is more aerodynamic than a Jeep.’” The issue is that we very quickly go from asking the machine to answer simple questions to asking it to complete entire assignments in our name. What exactly do we intend when we use these machines to do everything for us, when the only purpose and duty we have at a university, or any other school, is to learn? Do not get me wrong, I strongly believe that AI has enormous potential, and there are numerous people at our University of Agder (UiA) doing great things with it and experimenting with some of its many possibilities. Nonetheless, the way some students have started to use AI on group work and essays has not only jeopardized their chance to learn something useful, but also opened the path to a future in which we return to the past, where supervised school exams are once again the only safe way to prevent plagiarism and cheating – how ironic…
And yes, I do think that copying and pasting text from an AI engine is wrong when doing schoolwork, and so does UiA. Regulations have already been created on the use and misuse of AI by students. Perhaps not with the attention they deserve from the University and its faculties, but they exist nonetheless, and they do not apply only to students. UiA has also created regulations on the use of AI by lecturers, intended to reduce their workload and help them more easily process the hundreds of essays they need to read and evaluate each year. But what if the professors, like the students, have started to ignore the regulations given by UiA? What if lecturers have also started copying and pasting AI-generated text when commenting on students’ essays on Canvas? Who exactly is creating knowledge when students are using AI to create texts and lecturers are using AI to comment on AI? That might be an interesting exercise for a dedicated AI course, like the ones offered by the university. But what about the actual purpose of studying at a university? And what about the money the state spends on staff at public education institutions, when AI is being used to do work that is part of a lecturer’s job? Have we slowly but surely started walking into an abyss of ignorance and laziness where no one wants to learn anymore? … What would Kant think of this? In a fashion most appropriate to this article, I asked ChatGPT what Kant would think if he knew that both students and professors were using AI engines to do their schoolwork. It answered that
Kant would likely view the use of AI in education with caution, emphasizing the importance of autonomy and moral responsibility. He might worry that relying too much on AI could discourage independent reasoning, which he saw as essential for enlightenment and personal growth. However, Kant might appreciate AI as a tool to support learning if it fosters critical thinking and intellectual maturity, rather than enabling intellectual shortcuts that lead to dependency and intellectual laziness.
I do not have any solutions, nor do I have certainties. I would, however, like to see a debate between the students and the University of Agder about what exactly we should do about the misuse of AI by students and lecturers, when our capacity to think and to learn is so seriously threatened under the guise of comfort and of a certain privilege that only some, in this large world of many differences and indifferences, can actually enjoy. The time has come to discuss this matter and find solutions that will help steer a ship that has definitely sailed and that no one is stopping now.