AI in AI: The Dilemma in Academia

Rajshree Acharya, Law


Imagine the dilemma of a lecturer faced with two essay submissions from her students. One essay is crisp, well-structured, and linguistically polished, coherent and analytical, yet it does not even remotely refer to any class discussion – a suspiciously AI-generated work. The other is conceptually rich but less polished in structure and grammar; it is critically sound and analytical, drawing on real-life examples and classroom discussions to deepen understanding – a seemingly human-generated work.

The solution?

Now, one may suggest using AI detectors to overcome this challenge. However, such tools are far from ideal, given their technical shortcomings. Students quickly find loopholes to bypass them, and the tools themselves have repeatedly flagged original, human-written work as AI-generated. These false positives make them doubtful and unreliable, so leaning on them as the sole safeguard seems unwise.
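To see why such detectors are brittle, consider a minimal sketch of the perplexity heuristic that several detection tools are reported to build on: text that a language model finds highly predictable is flagged as machine-like. The model choice (GPT-2), the threshold, and the sample sentence below are illustrative assumptions, not the internals of any particular commercial detector.

```python
# A minimal sketch of a perplexity-based "AI detector", assuming the open
# GPT-2 model from the Hugging Face transformers library. Illustrative only:
# the 50.0 cutoff is an arbitrary assumption, not a calibrated threshold.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean
        # cross-entropy loss over the sequence; exp(loss) is the perplexity.
        loss = model(input_ids=enc["input_ids"], labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

excerpt = "Academic integrity rests on honesty, trust, and fairness."
score = perplexity(excerpt)
# Low perplexity is read as "machine-like" -- but polished human prose
# can score just as low, which is exactly how false positives arise.
print(f"Perplexity {score:.1f}: {'flagged as AI' if score < 50.0 else 'passes'}")
```

Because fluent human prose can be just as predictable as model output, any fixed cutoff of this kind inevitably misclassifies some genuine work – the very failure mode described above.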

Now consider a situation where policy requires lecturers to refer essays flagged for high AI-generated content to academic misconduct proceedings, which can lead to steep penalties, including expulsion.

The deserving essay

The consequences of the above example are apparent: the human-written essay naturally receives lower grades for its reasonable human errors, while a powerful AI prompt produces a sophisticated, grammatically correct essay in seconds. That shortcut bypasses the hours of research, critical thinking, and dedication a carefully constructed, analytical essay would require.

This scenario reflects the reality of academia today. The rise of large language models (LLMs) like ChatGPT has created challenges for both lecturers and students. While AI has revolutionised various fields, its impact on academia raises genuine concerns, and a considerable share of academics are deeply pessimistic about its use. Yet there is a difference between detecting some AI assistance and finding work generated entirely by AI: while AI tools are widely used, rampant AI-fuelled cheating is not as widespread as some initially feared.

The mighty LLMs

To put it simply, LLMs are advanced AI systems: large deep-learning models pre-trained on vast amounts of data. At their core, they work by repeatedly predicting the next word (token) in a sequence, which lets them understand, predict, and generate human-like text in formats such as stories, articles, and poems. They can engage in human-like conversations, providing coherent and contextually relevant responses.
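For readers curious what "predict and generate" means in practice, here is a minimal sketch of next-token generation using the small open GPT-2 model via the Hugging Face transformers library; the prompt and sampling settings are arbitrary choices, and systems like ChatGPT apply the same principle at far larger scale.

```python
# A minimal sketch of next-token text generation, assuming the Hugging Face
# transformers library and the small open GPT-2 model. The model extends the
# prompt one predicted token at a time until the length limit is reached.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Academic integrity matters because"
# Sampling (rather than always taking the single most likely token) is what
# makes the output read as varied, human-like prose.
result = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```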

In academia, LLMs have both positive and negative implications. On the one hand, their ability to simplify complex concepts, provide instant access to information, and support personalised learning is transformative. On the other hand, they challenge academic integrity, dilute critical thinking, and risk fostering dependence among students. This technology is reshaping educational practices, particularly assessment methods and traditional evaluation frameworks. Our understanding of academic integrity, plagiarism, and educational outcomes is undergoing a massive overhaul, whether we want it or not.

Transformative power of AI

AI’s contribution to education is manifold. It acts as an enabler, from enhancing accuracy in diagnostics and imaging to accelerating coding and advancing automation. Consider a student who is about to pursue her master’s degree in Germany and needs to learn the language. Apps like Duolingo use AI to interact with learners, offer feedback, and adapt lesson plans to individual progress, making the process more accessible and manageable. Similarly, educational platforms like Khan Academy employ AI to analyse student performance and customise lessons accordingly. Most importantly, such tools streamline administrative tasks such as automated grading and predictive analytics of student performance, easing the burden on teachers.

AI also helps bridge divides. A survey conducted at the University of Liverpool revealed diverse opinions on AI’s role in academia: over half of the students supported using AI for grammar assistance and for simplifying complex concepts, and many highlighted its role in levelling the playing field for non-native English speakers and students with disabilities.

Fragile side of AI

As AI continues to evolve, its applications in education are expected to grow, offering immense potential for transforming learning experiences. However, every coin has two sides. A primary concern is the impact of AI-generated work on Academic Integrity (AI in AI).

Academic integrity is centred on honesty, trust, and fairness, and AI challenges this core idea. Integrity ensures that submitted work reflects a student’s understanding of the material and emphasises learning over the mere completion of assignments; it also means giving proper acknowledgement and citing appropriate sources. Some students misuse AI, bypassing genuine learning and creating a facade of competence. This undermines foundational skills and raises concerns about plagiarism and passing off work that was never theirs to begin with.

Moreover, AI-generated content often carries the risk of misinformation or bias, demanding critical evaluation and fact-checking that many students may not undertake. AI’s black-box nature and lack of transparency in decision-making deepen doubts about the credibility of the work produced. A notable case in 2023 involved two New York lawyers who used ChatGPT for legal research; the tool generated fictitious case citations, resulting in sanctions and fines. Such incidents highlight the importance of human oversight and critical evaluation when AI is used for tasks demanding accuracy and credibility.

AI can also tilt the playing field, as the University of Liverpool survey highlights. Students who choose not to use AI face significant disadvantages: their efforts in developing writing and analytical skills are overshadowed by peers who leverage AI for similar outcomes with minimal effort, undermining the meritocratic principle that rewards dedication and ability. Furthermore, unequal access to AI tools exacerbates the disparity, favouring those with better access to technology.

Finding the balance

Integrating AI into academia is inevitable. There is no harm in using AI, but doing so responsibly and ethically is paramount. Rather than banning AI, institutions should develop clear policies on the extent of AI assistance allowed. Using Grammarly to polish grammar, for example, is not strictly unethical, whereas taking someone else’s research and rephrasing the same content without referencing it is. Some universities, such as Oulu University of Applied Sciences, have started giving their students and staff AI literacy training. Educating students on the ethical use of AI and its limitations is crucial to helping them make informed decisions, and teachers and instructors should reinforce this by showing where and how AI may legitimately be used.

Fostering open dialogue about AI’s potential and pitfalls will help the academic community understand and adapt. Hence, responsible and ethical AI usage is the only way forward.