Introduction
This is a blog post written with a feeling of loss and, at the same time, discontent on my part: studying and learning are things I deeply value, and seeing people use LLMs in a mindless, disproportionate way causes me aversion, even disgust.
For many years I have been concerned that education is undergoing a transformation: many universities nowadays have simply shifted toward producing executors rather than thinkers.
I think this is largely due to industrialization over the last 4–5 decades... I can use my parents as an example: in their time, it was still quite unlikely that someone would attend university (they are about 75 years old today). This changed over the decades, until we reached the point where, to perform many activities in industry, formal university education became almost mandatory. That transformed universities from producers of thinkers into producers of executors, given the "urgency" that industry forced on society to churn out highly educated individuals.
Now, this has its advantages, as today more and more people NEED access to formal, high-level education, but it also brought disadvantages, because this sudden demand turned some schools into mere "meat grinders."
Basically, over the decades, we've industrialized education.
And how does this relate to the use of LLMs, you might ask?
There's no definitive answer to this, only my impression that the excessive, uncritical use of LLMs at all levels will in fact further accelerate this industrialization while simultaneously worsening the formation of the thinkers who should be nurtured in universities.
In this post, I will try to give an overview of my perspective and of possibly good uses we can make of LLMs in specific situations. I will also draw a parallel to the model in which teacher and student assume a mentor-mentee relationship rather than a supervisor-supervised one, something that, in some cases, seems to have been lost.
Let's explore a bit, as the topic is broad, covering the following areas:
- Use of LLMs by teachers
- Use of LLMs by students
- Future impacts
Use of LLMs by Teachers
At first glance, the use of LLMs by teachers seems very sensible: LLMs are marketed as productivity-boosting tools for employees, and teachers are employees who could use them to increase productivity in content creation. Perhaps the main use would be generating content for students.
But here we fall into the trap and problem raised earlier: what is the purpose of the teacher? To be a mentor or a supervisor of the student? Does the teacher have the role of merely transmitting information, or of provoking, encouraging, and providing the guidance necessary for the student to grow in knowledge and critical thinking?
Historically, the teacher was the holder of knowledge, but today, with access to the internet and various other materials, that is no longer necessarily true.
Could we define the teacher's role as:
- Serving as a role model for the student
- Stimulating learning
- Mediating and curating knowledge
Here, if I may, I want to separate teachers into two types, given the historical pressures society has imposed over the decades (and by society here I mean industry):
- Engaged teachers
- "Assembly line" teachers
Apologies to any teachers who happen to read this post and identify with the second type. If this is your case, you may need to rethink your approach and the way you are embedded in the context of your university.
I'll start with the second type because I believe this is where students complain the most, and where teachers are merely transmitters of information. For them, perhaps LLMs are the missing piece to increase productivity and reduce manual work.
But what would productivity mean for these teachers, who, unfortunately, have been "infected" by their universities, which in turn were "infected" by industry (at least, I hope that is the explanation)?
More graduates, more students entering and being accepted by industry as valid employees: that is their "productivity." Perhaps that last part isn't even relevant to productivity, right? But a university that doesn't produce "good" employees ends up with fewer students. Therefore, if I have tools that help me produce more content that students can "learn" quickly, my "productivity" increases and my manual effort decreases.
I am not a university professor; the closest I've come is teaching open training courses. So feel free, reader, to disqualify me however you see fit. But even in those open trainings, the same idea persists: the more, the better.
I want to draw a parallel here: for "assembly line" teachers, LLMs can help quickly update their materials. A constant complaint from students "infected" by industry, or from those whose need for a job (to survive) requires a degree, is that their university teaches something no longer used (an outdated technology version, or some other subject irrelevant to their chosen industry). I draw a strong parallel here with IT (because of my bias, of course), where every year we graduate people whose knowledge is "obsolete" by industry standards (and there we have another big problem).
Probably, considering only the "assembly line," there are uses for LLMs such as writing complete lesson plans and generating exercise lists. But if a teacher blindly asks an LLM to do this without evaluating its output, the question arises: what is their role, then?
That's why I want to address engaged teachers: those who are not just standing in the classroom writing on the blackboard, but who value the knowledge they impart, consider how each student individually understands the subject, and assist those whose comprehension has not yet taken root. I'm choosing elegant words here to convey that being a mentor isn't just dumping content, but caring about how that content is absorbed.
I recognize the difficulty of being an excellent mentor when you have 40 students (or more) to focus on and evaluate, in addition to all other duties, like publishing articles. Remember what I mentioned above about the assembly line: more is better. Industry pushes teachers toward this as well, and this struggle is lost year by year.
Maybe, and this is a big maybe, for engaged teachers, LLMs could assist in creating personalized study plans that consider each student's personality, level of understanding, and goals. Critical thinking and individual evaluation of every result remain essential. In large classes, creating targeted material is more difficult; LLMs could assist with parts of it, but caution is needed.
My Take: Teachers can use LLMs, but their use must be extremely critical and limited to a few activities, and one should never blindly delegate content creation to the tool without evaluation (as LLMs can hallucinate).
Use of LLMs by Students
I'll start by saying: you as a student should not use LLMs for any activity, and you as a teacher should not encourage their use for any activity. Take this as a rule. However, like any rule, it has exceptions (or, in my case, contradictions and hypocrisy), and I'll admit that in some specific situations, with careful critical thinking, they may be used.
The role a student takes on is to learn something, a subject or an activity, and for that they may need a mentor.
Much has been said about replacing teachers with LLMs, and I won't even go into that, as I understand you could replace a teacher with an LLM in the same way you could replace a teacher with books or other materials for self-guided study (given the immense limitations LLMs carry, like the infamous hallucinations). I haven't seen much discussion, though, about the possibilities or problems a student wishing to go down this path might face.
At this point, as I mentioned, students should refrain from using LLMs to guide their study and instead use the material the teacher provides. Considering everything I've discussed before, I imagine the teacher, as curator of information, will do an excellent job and deliver material of high quality.
Therefore, students should not use LLMs to study any content, and if they do happen to use them to expand their vocabulary or gather information on a topic, they should be wary of hallucinations (because they will occur).
Another extremely important point: do not use LLMs to solve the exercises and problems the teacher assigned to build and reinforce understanding of the subject at hand. I shouldn't even have to say how crucial it is for students to actually do the work themselves, right? A recent study indicates that using AI to perform tasks humans can already do leads to cognitive decline; imagine not even learning the topic being studied.
"Over time, the group showed a consistent decline in engagement, performance, and self-reported satisfaction."
My Take: If students are allowed to use LLMs, then to prevent abuse, it would be wise for teachers to create guidelines for use, explaining what is appropriate and possible without harming learning and understanding. These guidelines should also be shared with students' parents, so they can judge whether their children are using them correctly.
Future Impacts
I believe many problems caused by LLMs have already been cited elsewhere, so I'll just list them here and discuss those I find relevant for education.
Dependence and Addiction
Perhaps one of the most harmful for students; if uncontrolled, it will surely lead to a drastic decrease in society's knowledge and cognitive capacity. Here, society loses: it is left with individuals capable of less critical thinking. But some gain, such as those who manipulate this to acquire power. So, who benefits from people becoming addicted to these and other technologies?
Job Market
The absurd idea that "AI won't replace you, but someone who knows how to use AI will" can pressure students and others into using these tools unnecessarily, creating a race to learn or use something that may have no relevance to their end goal.
Be warned: one of the worst evils is when the job market forces us to learn specific knowledge simply because it is deemed valid, advantageous, or profitable.
Content Normalization
The more we use LLMs, the greater the chance content becomes uniform, repetitive, monotonous, and devoid of personal voice. I wrote a blog post discussing the issue of society shaping us to be alike. Overusing or delegating everything to LLMs is essentially the same: it removes your individual ideas from society, leaving it standardized and homogeneous.
Dependence on Companies
One of the worst factors there can be: depending solely and exclusively on companies, especially if we don't have open, low-cost alternatives. Not to mention all the privacy and security issues that excessive use of LLMs will bring.
Therefore, avoid sharing any sensitive, confidential, or private information with any LLM (except those you run locally and fully control).
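To make "run locally" concrete, here is a minimal sketch of what that can look like, assuming you run a local Ollama server (which exposes an OpenAI-compatible endpoint on port 11434); the model name is just a placeholder for whatever model you have pulled yourself:

```python
# Minimal sketch: querying a locally run model instead of a cloud service.
# Assumes a local Ollama server and its OpenAI-compatible endpoint;
# "llama3" is a placeholder for any model you have pulled locally.
import requests

def ask_local_llm(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",
        json={
            "model": "llama3",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The prompt and the response never leave your machine.
print(ask_local_llm("Summarize these private study notes: ..."))
```

The point is architectural: when the model runs on your own hardware, no company ever sees your prompts or your data.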
Considerations
The use of LLMs should be highly discouraged for everyone in education, especially students.
If the college/university chooses to use them, and/or the teachers do as well, it would be prudent to have guides or best-practice documents precisely to prevent excessive, unnecessary, or even illicit use.
I notice that unnecessary use is increasing, and the industry seems to encourage it with little challenges like "what can't LLMs do": millions of users asking ChatGPT what 2+2 is, or how many "r"s there are in "strawberry". Completely unnecessary uses that demonstrate a fact stated and affirmed many times: LLMs do not have knowledge and do not reason (they lack the cognition to distinguish logic from mathematics).
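Just to underline how unnecessary these uses are: both "challenges" are deterministic one-liners in any programming language, with no model (and no hallucination) involved. For example, in Python:

```python
# Neither of these needs an LLM: both are exact, deterministic computations.
print(2 + 2)                    # 4
print("strawberry".count("r"))  # 3
```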
In this way, we continue to encourage the use of these tools, which for some are a salvation and for others the end of our civilization, but in the end will simply come to dominate us like so many other technologies and algorithms (as social media did over the past decade).
Prof. Adolfo recently translated an open letter (Open Letter: Stop Uncritical Adoption of AI Technologies in Academia) from a university in the Netherlands, advising against LLM use.
Finally, I leave one last excerpt from Microsoft researchers on even moderate LLM use:
"[...] it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI's ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship." The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers
See you next time 😀