Guest column: University of Tennessee “embraces” artificial intelligence, downplays dangers
By David Barber, professor
At the end of February, the University of Tennessee Board of Trustees adopted its first artificial intelligence policy with little attempt to engage faculty and students in any serious discussion of the many issues and problems surrounding AI.
Likewise, the UT Martin Faculty Senate approved the Board’s policy statement in late April, also without engaging the campus community in a meaningful discussion of the issues.
In Section V of the document, titled “Policy Statement and Guiding Principles,” the first subsection reads: “UT Martin embraces the use of AI as a powerful tool for the purpose of enhancing human learning, creativity, analysis, and innovation within the academic context.”
While the document as a whole touches on potential problems such as academic integrity, intellectual property rights, and the security of protected university data, nowhere does it mention the potential dangers and negative consequences of AI’s rapid growth and proliferation. Faculty members have already seen these consequences first-hand: AI is limiting the very “human learning, creativity, analysis, and innovation” the policy claims it will enhance.
Those of us who teach in the humanities have witnessed the increasingly widespread student use of AI, even on low-stakes assignments. For example, AI allows students to bypass the effort of trying to understand a reading. If students attempt a difficult text and struggle to make sense of it, they can ask AI to explain. More often, however, students skip reading altogether and ask AI for a summary, analysis or other grade-directed answers.
In approaching a novel, a historical narrative or even the social realities of our own time, readers start with limited knowledge of the characters, events or forces at play. To understand a character’s motives, the relationship between events, or the social, economic and political interests driving them, we must construct and refine a mental image—a hypothesis—through careful reading.
This process is the heart of education. Only by grappling with a text, a formula or a method for solving a problem do we truly learn. Without that effort, students may arrive at the “right” answer, but they have not gained the tools to understand the problems they face—or to live morally and intelligently in the world.
As complex as a novel or historical narrative may be, the real world is far more complex. If we rely on AI’s interpretation instead of building our own understanding, we deprive ourselves of the skills needed to engage with that complexity.
And it gets worse, because AI is not an objective source of knowledge; it is structured by and for particular interests. Research has demonstrated that AI systems embed the biases and interests of their creators, from perpetuating racial discrimination in search results to reinforcing existing inequalities through algorithmic decision-making. Rather than being impartial, these technologies often serve to maintain current power structures while appearing technologically neutral.
For example, someone recently asked Elon Musk’s AI, Grok, whether the left or the right was more disposed to the use of violence. Based on the evidence it had accumulated from the internet, Grok answered, correctly, that the right has been far more guilty of political violence than the left. Musk was enraged and blamed Grok’s answer on its over-reliance on what he termed “the legacy media.” He promised to “correct” Grok, that is, to alter its programming to weigh its source materials differently. Grok then began calling itself “MechaHitler” while spouting overtly antisemitic propaganda. In addition to deadening our ability to think critically, AI will simply spit out what the tech billionaires running it would have us believe.
Our university’s mission statement reads: “The University of Tennessee at Martin educates and engages responsible citizens to lead and serve in a diverse world.” Yet we fail that mission in dozens of ways. Our students by and large do not follow the news on a regular basis and are largely unaware of the great issues of the day. Few leave the university with a love of reading, despite its importance to responsible citizenship and to understanding contemporary issues.
With this new AI policy, the university risks compounding these failures by embracing a technology that may further erode students’ ability to think critically about the world around them.
David Barber is a history professor at the University of Tennessee at Martin. His email is dbarber@utm.edu.