I observe and interact with humans between the ages of eighteen and twenty-five almost every day of my life, as part of my job. Like many, when AI chatbots were maliciously (indiscriminately is too kind a word) released into the wild several years ago, I watched in horror—the kind of horror one might feel watching a pod of orcas play with a baby seal before ripping it to shreds. Only in this case the seal believes itself to be the one having all the fun. That young people were attracted to an openly available technological wonder is not surprising, though it was disconcerting to witness. What was truly shocking to me in that first year—and still is—was the almost complete lack of resistance from those charged with educating these young souls. Half-hearted appeals for deliberate discernment about the adoption of AI into higher education quickly dissipated into the chorus of administrators recommending “responsible use,” a phrase that unintentionally treats AI as a harmful substance, yet one we cannot expect young people to abstain from.
My own view from the beginning of this new era of machine domination has been that tools of such power require their users to have an extremely high degree of maturity and responsibility. But the whole growth model of corporate AI tools requires as much input and interaction as possible from as many users as possible. Thus it came to be that perhaps the most powerful tool ever created by humanity fell into the untried hands of a generation known for, among other things, eating Tide Pods. Not that all the blame for the irresponsible use of AI falls on younger generations. Aspirations to transgress the bounds of natural limitations have plagued humanity from the beginning. “Come, let us build for ourselves a city, and a tower with its top in the heavens, and let us make a name for ourselves,” Babel’s vision-casting team said. The builders and sellers of AI—futurists, pioneers, entrepreneurs, and engineers, along with their funders—will be the most culpable for the harm caused, especially if their products are designed to entice and addict.
In the many conversations I have had on this topic over the past three years, most of my interlocutors have taken either a neutral or a positive stance toward AI, though of late I have seen some who were previously unconcerned begin to grow wary. The majority of those I speak with, however, have given little thought to the potential downsides of AI. Few have made the effort to subject their use of AI to moral scrutiny, and fewer still have dared to ask whether any and all use of AI may implicate the user in some moral evil. Yet there are a multitude of reasons not merely to approach AI with caution but to engage in determined opposition to it. In what follows I offer a handful of what I take to be the weightiest of these reasons, and I invite those who read this to consider joining the resistance.
The most common reactions I hear in response to my rejection of AI have to do with all the real or imagined goods AI might help bring about: medical advancement, all manner of scientific research, eliminating dull and time-consuming tasks in human work, and so on. Virtually everyone acknowledges the risks of AI and the ways it can be used for harm rather than good, but these are taken as the price that must be paid to attain the good. AI is simply a tool, they say, and like all tools it can be used for good or for ill. It would be ludicrous to get rid of the tool just because some bad actors abuse it. This line of thought fails in at least two ways.
First of all, it fails to recognize the distinction between moral evil and natural evil. Natural evils are things we deem bad—usually due to the loss of life—but for which no one is culpable: natural disasters, diseases, and the like. Moral evils are things we deem bad and hold people responsible for. Moral evils are always more grave for the souls involved than natural evils, even if the scale seems smaller. AI may help attenuate natural evils by finding cures for diseases or drastically improving disaster prediction, for example. But the capability and availability of AI have already drastically increased moral evil. This is not a net gain for humanity. To put it bluntly, it is better to suffer natural evils and possess virtue than to eradicate natural evils and lack virtue. Ends never justify means.
Second, although AI can be thought of as a tool, it is unlike any other tool due to its ability to “self-improve” based on human input and interaction along with ever-expanding data sets. If a person uses a hammer to smash the skull of an innocent other, this is clearly a case of a tool being misused. But the hammer undergoes no change; it does not become a better weapon through being used to commit murder. With AI this is not the case. As more and more people use AI tools, the tools themselves change and get better at doing what their users ask. This means that even supposedly harmless uses of AI actively contribute to the tools becoming better at bringing ill intent to fruition. Millions of users creating relatively innocuous fake videos for work or for fun, for example, contribute to the tool becoming better at making realistic child pornography. Again, this is not a tradeoff we should be willing to tolerate.
In addition to these moral questions pertaining to the very existence and use of AI, there are moral concerns pertaining to the peripheries of the AI complex. A prime example is the investment being poured into creating an AI-integrated world. Hundreds of billions of dollars have already been spent on developing and deploying AI tools, and every sign suggests this will only increase in the near future. If achieving an AI-integrated world were a matter of meeting critical needs, perhaps such expenditures would be justified. But there are no critical needs in the world that can only be addressed with AI. Poverty, violence, famine, loneliness, lack of healthcare, environmental collapse—these are needs which, in order to be met, require human and financial resources, not AI. Because these critical needs have not been met, it is unjust to direct such vast resources toward something which, though technologically revolutionary, is quite frivolous in relation to the realities of human existence. We do not need AI to flourish as humans. We do need peace, healthy communities, clean water, fertile soil, and a dependable food supply, none of which AI can supply.
The environmental cost of AI is something to consider as well, working on the assumption that keeping this planet perpetually fruitful and safe to inhabit is important for humanity. Massive data centers are required to keep AI running, along with cloud computing, streaming, and a host of other digital commodities, and each data center requires enormous supplies of electricity and water, not to mention land. Through their electricity consumption, data centers are collectively becoming one of the leading sources of carbon emissions, and newer “hyperscale” centers are expected to require more water than current ones. On a planet that was already facing environmental crises before the rise of AI, surely AI’s demands for energy, water, mined materials, and land will make it even more difficult to address these crises, especially since AI seems to have quickly been deemed a necessity by corporate and political powers. Care for our common home ought at least to make us question whether this is the best path forward for humanity.
There are a host of other reasons to reject the use of AI outright, such as the minor issue of a possible machine-assisted human extinction event, but I will offer one final thought. The use of AI degrades the value of human work and so of humans themselves. Those who tout the benefits of AI, like most advocates of machine technology, view natural human limitations as weaknesses to be transcended if possible. This attitude is of course a derivative of an economic order built on hyperconsumption. In both production and consumption, more and faster is always better. Since the introduction of machines greatly enhances production, the goodness of machines is not to be questioned. But the unmentioned corollary is that the productive value of a mere human without a machine is reduced to almost nothing. Being human is not enough. You and I, considered in ourselves, are of insufficient value in the minds of those pushing for an AI-integrated world, unless we augment our humanity with machines.
The effect of this in the agricultural realm is widely recognized. Even though the only things required to make land productive are human care, human or animal power, and water, a farm without machines is a near impossibility today. And the idea of farm work being done without machines is seen as something rightly relegated to some past peasantry. This same degradation of work done at an actual human scale by actual humans will come to every sector that adopts AI. Productivity will go up, value and diversity will go down, and the monoculture dullness that one sees flying over the machine-leveled fields of middle America, with its ecological and cultural costs, will characterize much of society and the economy. Yes, you will be able to escape from this ugliness into virtual worlds. But I, for one, would prefer to enjoy the beauty of the real world—the beauty that is the ordered harmony of great variety; the beauty that must be cultivated, preserved, and fought for; the beauty that can only exist when natural limits are respected; the beauty that came under threat after the industrial revolution; the beauty that could very well disappear after the AI revolution. Unless we resist.