What is the role of an ethicist? Is it to be an impartial observer? A guide to what is good or bad? An agent of change? Here, I will explore the different roles in the context of AI ethics through the terms descriptive, normative, and action AI ethics.
AI ethics is a specific field of applied ethics nested in technology ethics and computer ethics.30 Applied ethics is not primarily concerned with answering questions about what ethics is or the philosophical foundations of ethics, but rather tends to take existing work from other areas of ethics and apply it to a particular field.34
An AI ethicist might seek to objectively observe and describe the ethical aspects related to the development and deployment of AI. For example, I might be interested in trying to understand how the X platform changes and shapes political processes, individual opinion formation, or people’s well-being, and then simply describe my findings as objective facts. This would be a case of doing descriptive AI ethics.5
Alternatively, I could take a more central position by incorporating my evaluations of the desirability of certain features and policies of X in my research. This approach would entail me advocating the goodness or badness of X in various respects, thereby engaging in normative AI ethics.
A natural question for those doing normative AI ethics would then be: To what extent—if any—are they responsible for acting on their own findings and convictions? More specifically, can an AI ethicist who finds, for example, X to be deeply problematic continue to use the platform, and through this in various ways support and strengthen it? This is what has been called the AI ethicist’s dilemma.27,28 It explores how someone who sees harms stemming from a technology can easily become directly responsible for or complicit in those very harms unless they take action, which is often both costly and difficult.
Beyond posing a dilemma, however, this situation also allows us to posit a third form of applied ethics: action AI ethics. This form extends normative AI ethics by positing a responsibility for ethicists to act on their research findings to achieve positive change. In the case of X, action AI ethics would entail taking action aimed at reducing the harms caused by the platform. This might require actively undermining the platform or trying to engage with it to change it for the better.
In this article, I review the history and major issues associated with the X platform. Then, using X as an example, I discuss the challenges of descriptive and normative AI ethics and examine what action AI ethics might entail. Finally, I offer some practical considerations for those wanting to practice this form of ethics.
The Case of X
To understand the case I will refer to throughout this article, let us briefly review the history of X, the platform formerly known as Twitter. Founded in 2006, Twitter had a 17-year run, during which it became a major global social media platform. The platform was renamed X in 2023, and in 2024 it had approximately 238 million daily active users. This was significantly lower than the 2022 numbers, as the platform is reported to have lost a large share of its human users and activity since Elon Musk purchased it in 2022.11,13 (There might, however, be more bots there now than there were previously.25)
Since Musk took over Twitter, a wide range of changes have occurred, including the firing of most of the staff working on trust and safety3 and election integrity.14 The moderation of tweets has also been drastically reduced and changed. In July 2024, the European Commission stated that X was in breach of the transparency and accountability measures described in its new Digital Services Act regulation. It mentioned the practices of preferential access and increased visibility for so-called blue checks, lack of researcher access to data, and lack of transparency around advertising.8
Elon Musk himself is (hyper) active on the platform, posting and reposting content that is deeply concerning to many, such as hints that the Secret Service intentionally let the assassination attempt on Trump in July 2024 happen.7 He has also drawn considerable criticism for supporting various users, posts, and theories related to antisemitism,23 neo-Nazism,12 the alt-right,18 and a wide range of conspiracy theories. In July 2024, he stated that he ideologically supports Donald Trump and that he would fund his campaign in the 2024 election.35 This has led some to question the political neutrality of the platform, despite Musk's initial claims that it would remain neutral.18 A further interesting fact is that Musk's former partner at PayPal, Peter Thiel, has funded and heavily supported Trump's vice president, J.D. Vance.16 Other influential tech entrepreneurs, such as Marc Andreessen and Ben Horowitz, are now also routinely linked to Vance's rise and general support for Trump, raising further concerns that powerful people in tech are willing and able to use their resources to gain political influence.32
There are several key issues that animate debates over X. How one perceives and evaluates these issues, however, will vary from person to person—and thus, from ethicist to ethicist. Going forward, let us assume that we are speaking of ethicists who are concerned with, for example, increased leniency on hate speech (for example, targeting LGBTQ+ people) and the growth of radical, right-wing conservative politics on the platform, including support for what many perceive to be authoritarian and anti-democratic policies associated with Trump. Some might be concerned about the proliferation of bots and the increased potential of the platform being used for disinformation campaigns in election times. Others are wary of the autocratic rule of X, which is now in the hands of the world's richest man—who also actively uses the platform to pump his other companies.9
In sum, there are many potential concerns, related to the fundamental implications of the platform itself, recent changes in how the platform works, and who owns, controls, and benefits from the platform now.
The Illusion of Objectivity
We will now take a look at descriptive AI ethics and see why it is problematic. While some claim to do positive or purely descriptive research—research that is objective in the sense of being detached from both the researcher and the phenomenon being researched—this objectivity is illusory.
For example, I might claim that I am an objective AI ethicist. I do descriptive AI ethics and only try to identify and reveal objective facts related to the ethically relevant issues of X—I describe the world as it is, without any thought of how it ought to be. But there are at least two serious problems with this stance.
First, I might have to engage with the platform to do my research. For example, as X has cut off researcher access to data,1 it might be necessary to rely on data-gathering methods that involve having a user account and taking part in the platform’s activity. I, as a researcher, thus become part of the phenomenon I am researching. I will be an active entity in my research object, and through this participation will both influence and support the platform. I do so by, for example, increasing the user numbers and driving platform activity through my interactions, thus legitimizing the platform through my presence.
Second, even if I could get a hold of historical data without being a user of X, I would still not be able to distance myself fully from my research. As I publish my findings—purportedly objective findings related to the ethical implications of X—this will change people’s perceptions of and behaviors on the service and thus also the service itself. It is therefore impossible for published research to be divorced from its impacts on the world. For example, X might adjust its policies and actions if the company knows a research project is happening, and users and others might change their behavior on or toward the platform based on published research findings.
Further, there are several kinds of objectivity, notably ontological, mechanical, and aperspectival.6 The first concerns the search for the fundamental nature of reality and thus is not important for this discussion. More relevant are mechanical objectivity, which seeks to remove interpretation from scientific reporting, and aperspectival objectivity, which concerns the removal of individual or group idiosyncrasies.6 However, full objectivity is impossible because the researcher is embedded in the phenomena and world they are researching and all theory inescapably has a normative foundation.36
These considerations suggest there are at least two reasons why objectivity in AI ethics is illusory. Nevertheless, some might, of course, try—or claim—to be doing objective descriptive AI ethics. However, at most they can seek to provide a description of ethical implications, positive and negative, without an accompanying evaluation of whether these are good or bad. But even this falls short of meaningful objectivity. The researcher is based in a certain tradition and looks at certain data; therefore, values and assumptions will always be a part of such a research project. Pure, neutral, descriptive theory is consequently impossible.26,36
Normative AI Ethics and Walking the Walk
Moving on to normative AI ethics, this entails gathering and analyzing information about a phenomenon, such as X, while also including one’s evaluations of its implications in terms of goodness or badness. For example, one could provide a description of how X changes political dialogue in a certain community. Beyond just describing such changes, the ethicist will also explicitly evaluate whether and why these changes are problematic or something to be desired.
This form of normative AI ethics is quite straightforward. It is easy to do and does not pretend to be objective. It is, of course, important that normative AI ethics researchers explicitly declare and describe their assumptions and adherence to ethical theories. For example, did they rely on consequentialism or virtue ethics? And what is the background and positionality of the researcher themselves? If these things are stated clearly enough, normative AI ethics can be relatively unproblematic.
Another question that naturally follows is what, if anything, is the researcher’s role in what comes after the identification and evaluation of a problem? That is, what is the responsibility of a person doing normative AI ethics to actually try to contribute to desired change? Does the researcher have an obligation or a duty to act on their own findings? For example, if they find out that being a part of X and having a user account and driving traffic there supports the platform in ways that cause harm, is it justifiable for the ethicist to continue being there? If so, the researcher will risk being accused of hypocrisy. I myself have stated that I am a hypocrite. After all, I wrote about the AI ethicist’s dilemma27 and continued to use X until mid-2024. This might, of course, be justified. Researchers might say that these platforms are bad, but that they don’t see themselves as agents of change. They can see themselves as having no responsibility beyond just communicating their opinions on what is right or wrong. Or, as I’ll return to, they might feel they must use the platform, or that their use is justified in other ways.
This touches on problems of complicity.15 If the researcher stays on the platform, they might be contributing to the harms they have already identified as problematic27—not necessarily directly or in ways that imply they personally played a sufficient role in bringing about the harms, but rather through their indirect support of the collective action that brings them about.15
What could be the reasons for not accepting the role as an agent of change? There are both philosophical and pragmatic justifications. For example, if we adhere to a moral philosophy based on ethical solipsism—a highly self-centered and individualist form of ethics—we will be blind to many of the important relational and positional factors that affect whether individuals can be held accountable for collective harms.15 Overly individualized theories of accountability obscure how individuals can be held responsible for the harms caused through X. Christopher Kutz15 explains this by highlighting how these theories tend to require the satisfaction of three criteria to assign accountability: individual difference, individual control, and individual autonomy.
To illustrate, some X critics might, in theory, want the platform to be gone. They might want others to do something to change the platform or outlaw it, but they will not see themselves as having any responsibilities, beyond perhaps communicating their convictions. With an overly individualized account like this, it is easy to downplay one's contribution to the harms (difference), to point to one's inability to control or prevent them (control), or to deny accountability for other agents' actions (autonomy).15
More mundanely, the researcher might simply be selfish. They could, for example, earn more money by promoting their podcast on X. Even if they think the platform is bad—at least for some people—it is good for them personally to remain there. Here, you might object that it is easy from a position of privilege to accuse others of principled or unprincipled selfishness. I am tenured, with a relatively safe position at a decent university in a safe country. Others, you might say, are not as free to softly rebel against power and bear the costs associated with openly criticizing or fighting big tech companies and others with power. Granted, but the reason for their non-action will still be pragmatic and self-interested, and often not philosophically based in a set of clear moral principles. It is both common and easy to seek ethical justifications for selfishness. As John Kenneth Galbraith said of modern conservatives in 1964, those who do so are engaged "in one of man's oldest, best financed, most applauded, and, on the whole, least successful exercises in moral philosophy. That is the search for a superior moral justification for selfishness."a
One group of users that often seems to turn a blind eye to the harms caused by social media in pursuit of their self-interest is politicians. Many of them are super-active X users who provide the platform with crucial lifeblood. For example, in July 2024, Joe Biden chose X to first communicate his withdrawal from the 2024 presidential election. He could have used another platform, perhaps publishing his letter first on a website or mailing list, or reading it live in a press conference. However, he chose to do it on X, despite Musk just weeks earlier vowing to support the Republican candidate, Donald Trump. This choice meant that newspapers around the world then referred to this X post, creating the impression that X is where news is broken. This, again, helped Musk monetize the platform to generate the funds he used to support Trump in the 2024 election. Such choices are particularly interesting, as politicians might be perceived as having a special obligation to do the right thing—to be good role models, to practice what they preach. At the same time, they need to get their message out to their target groups. Deals with the devil have been made for less.
Returning to academics, two illustrative examples are Paris Marx and Evgeny Morozov, who at the time of this writing were both still on X. Both are prominent critics of the power of big tech and its ideologies, as well as the effects of Silicon Valley tech products. Still, both heavily promote their work on X. Marx promotes his podcast Tech Won't Save Us, which, ironically, tends to be about the evils of the tech industry, including Tesla and Musk. For example, in a story in Time Magazine, Marx portrays Musk as an "unaccountable and increasingly hostile billionaire" we need to look beyond.17 Morozov has written extensively on the dangers and shortcomings of the ideology and power of people like Musk.19 Yet he continues to share his anti-capitalist and anti-Silicon Valley articles and research on a platform owned by an obvious target of his research.
In contrast, prominent big-tech critic Shoshana Zuboff publicly criticized Musk's perceptions of social media, posting her last tweet on April 15, 2022.b She does still keep an account, though. Many others refrain from having accounts at all—largely leaving us to guess what their motives and reasoning are. Still others might keep their accounts without being active, perhaps to preserve some of the history they've made through Twitter,c or due to the psychological discomfort caused by the prospect of deleting well over a decade's worth of posts and followers.
The AI Ethicist’s Duties and Obligations
The move from normative to action ethics can be framed in light of a potential obligation or duty to act. I will thus examine in some detail the potential duty or obligation for ethicists to actively seek and contribute to positive change, whether through individual action, collective action, or efforts to make others perceive the challenges the same way they do. Before embarking on such an exploration, though, a few words on the meaning of duties and obligations are in order.
I base this brief discussion largely on R.B. Brandt’s analysis2 of the two terms. There is a tendency to use the two concepts interchangeably, but Brandt argues that they are different enough to warrant distinction. Having a duty or obligation generally means that someone has a right to expect something from you. It is often coupled with moral to imply that we have certain moral duties or obligations, but they can also be professional, legal, and so on.
One way to distinguish the two is to see duties as something more often connected to roles, positions, and offices—something not necessarily or only connected to morality but also supported by institutional mechanisms. Obligations are more often based on a feeling of owing something to someone. A moral obligation may thus go beyond—and conflict with—our formal duties.
For example, an AI ethicist employed in academia is bound by certain implicit and explicit contracts and agreements to act in a certain way. Their role or position comes with certain duties: duties to actually do some work, but also to do it in a particular manner, such as upholding academic integrity in their research and abiding by a range of research and professional ethics. Thus, our employers, governments, or professional groups might assign or impose certain duties on us. Of course, we are also part of these entities and often in a position to influence these duties.
Obligation indicates something slightly different—something that we personally must accept for it to come into force; something related to our personal conscience, not mainly our roles or positions. Social and moral obligations are generated through relations and interactions. While my government can create various duties related to my role as a citizen, obligation arises through my accepting or recognizing the rights of others to demand something of me, based on my implicit or explicit actions or inactions that create such demands.
This all might sound like unnecessary wordplay but, as will become clear later, there is a reason to distinguish externally imposed and internally accepted expectations on oneself. For example, I believe that it makes sense to say that an ethicist has obligations that go beyond their duties.
Obligation in Arne Næss’s environmental ethics. An ethical theory that posits a duty to act is Arne Næss’s Ecosophy T,20 a specific kind of deep ecology that aligns fairly well with the action-oriented ethics described above. Næss’s philosophy is based on eight premises, seven of which are a mix of descriptive and normative statements:
- The flourishing of human and non-human life on Earth has intrinsic value. The value of non-human life forms is independent of the usefulness these may have for narrow human purposes.
- Richness and diversity of life forms are values in themselves and contribute to the flourishing of human and non-human life on Earth.
- Humans have no right to reduce the richness and diversity except to satisfy vital needs.
- Present human interference with the non-human world is excessive, and the situation is rapidly worsening.
- The flourishing of human life and cultures is compatible with a substantial decrease of the human population. The flourishing of non-human life requires such a decrease.
- Significant change of life conditions for the better requires change in policies. These affect basic economic, technological, and ideological structures.
- The ideological change is mainly that of appreciating life quality (dwelling in situations with intrinsic value) rather than adhering to a high standard of living. There will be a profound awareness of the difference between big and great.
But the most interesting premise is the eighth one, in which Næss describes an obligation to seek change:
- Those who subscribe to the foregoing points have an obligation directly or indirectly to participate in the attempt to implement the necessary changes.
The exact nature of this obligation is not specified, and Næss states that there will naturally be disagreements about what sort of action to take, what types of strategies are appropriate, and which challenges should be prioritized. However, he does not see such disagreements as a barrier to “vigorous cooperation.”20 Næss’s obligation does in part stem from the theoretical underpinnings of his philosophy, in particular his suggestion that we are all part of an interwoven network of life in which we are merely nodes in a “total-field.” For Næss, realizing this—through what he calls self-realization—means that we must assume responsibility for our conduct toward other beings. Also, humans have greater understanding of the nature of all things than other beings, which gives rise to a duty for humans to act morally—a duty that will not apply to, for example, wolves and other animals lacking the necessary knowledge and capacities.20
For the AI ethicist, an argument could be made that their training and background equip them with (a) the capacities that ground a duty to effect change, while their research activities provide (b) the knowledge of challenges that further strengthens this duty. Finally, (c) for the obligation to become real, they must evaluate the situation in a specific way and have a moral conviction that allows them to conceive of themselves as obligated to act.
Næss discusses researchers’ duties and obligations at length elsewhere, suggesting, for example, that those working on issues related to environmental science and ethics should contribute to the changes required to face environmental challenges.21 Of interest to the topic at hand, he specifies that such expectations also apply to researchers outside academia. He claims that it is unworthy for a researcher to accept to be a “tool, a functionary, a technician,” and that the researcher has a responsibility for how their employers use the results of their research.21 This is thus something beyond a professional duty. For those who accept Næss’s arguments, it is a personal moral obligation.
The idea that we have a duty or obligation to seek positive change is clearly not new, but most theories seem to focus on our general responsibilities as human beings, and not whether ethicists in particular have a responsibility to act. Furthermore, much of moral philosophy has been too focused on individuals, with the result that harms resulting from collective action have gotten insufficient attention. Understanding the intricacies of the attribution of praise and blame when individuals act together is difficult, but also crucial for understanding most real cases, in which individuals are rarely, if ever, acting alone or in isolation. In such cases, one might question whether the notion of ethics even applies.
Action AI ethics and the three mainstream ethical theories. The three major ethical theories—virtue ethics, deontology, and consequentialism—can all be used to illustrate the potential obligation of AI ethicists to do action AI ethics. They also all have implications for collective action. In the following, I summarize these theories (discussed in Sætra et al.27) and the key takeaways relevant to ethicists’ obligations.
Virtue ethics entails evaluating right and wrong actions based on what a person with virtue would do, and what would contribute to becoming more virtuous. If, for example, kindness, courage, honesty, and integrity are virtues, it seems quite straightforward to argue that acting in ways that minimize harms to others, even if this comes with certain personal costs, is a good thing. A courageous person with integrity would, for example, not actively contribute to the success of X if they are aware of harms to others resulting from such a contribution. It might thus be said that anyone who believes that courage and integrity are virtues, who believes that fostering and embodying these virtues is the right thing to do, and who believes that supporting X as a platform goes against these virtues, has a moral obligation to actively undermine the platform. However, such considerations also require us to account for legitimate disagreements regarding the effects of different strategies and priorities, as emphasized by Næss. This takes us to the need to evaluate consequences.
Consequentialism is a theory in which the rightness and wrongness of actions are evaluated based on their consequences. In brief, this theory allows us to legitimize seemingly wrong, evil, or bad actions if the overall outcome is good: The ends might justify the means. For the AI ethicist critical of X, then, the key question would be whether their staying on the platform, for example to gain more reach, will contribute more effectively to the desired changes than their abandoning it. If an ethicist accepts that consequences are the determinant of right and wrong, the ethicist who believes the world would be better without X will be morally obliged to act in ways that maximize their contribution to such an outcome. Not at all costs, of course—individuals will have to do their own calculus on overall consequences related to the costs to themselves, the costs to others, the costs of civil or legal disobedience, and so on. One potential problem with a duty to act based solely on instrumental considerations is that these are far removed from much of the moral life and the senses of duty, right, and wrong as lived and experienced by real people. Such instrumental foundations of duty therefore might not be effective in giving rise to desirable behavior.15
Deontology often involves evaluating actions based on whether they can be turned into universal maxims or rules. For example, Næss's eighth principle is a formulation of a universal rule that might make sense to some. Why would one object to such a rule? Some might, for example, adhere to a very different ethical theory—say, Ayn Rand's objectivism22—and argue that individuals have no duty to act in ways that promote the interests of others if this is harmful to themselves. By such a standard, the only acceptable universal rules would be those that are clearly in every individual's best interest. Immanuel Kant is perhaps the most famous proponent of a deontological ethic, and one of his key principles is that it is never acceptable to use other individuals simply as means to some other goal.24 Such a principle would entail constraints for AI ethicists who know that their actions involve harm to some individuals, even if they believe they will overall be able to help more people through engaging with instead of leaving X. The deontologist identifying with such principles would then have a moral obligation to leave X to ensure that they do not violate the principle and turn unknown others into instruments for their own—or others'—purposes.
These ethical theories all have much more nuance and many different directions not covered here. However, this brief overview helps us see some of the potential arguments and challenges related to different foundations of the potential obligation to act. Furthermore, they are all capable of addressing the relational and collective aspects of ethics. Virtues often relate to and involve collaborating with others; consequentialism weighs consequences not just for oneself but for other individuals and groups; and deontology also tends to deal with our duties or obligations to others.
Complicity, hypocrisy, and the individual in the collective. Sometimes we can readily identify the morally responsible individual who, for example, directly causes harm, or could have prevented harm but did not. In such cases, the three criteria of individual accountability referred to earlier (individual difference, control, and autonomy) suffice to dish out moral reproach, praise, and so on. We can easily pin the tail on the moral donkey.
Other times, however, harms are much more diffuse. They might be systemic or structural, leading to situations in which various structures generate harms to individuals or groups, without there really being one or a few individuals to easily pin the tail of accountability to. For example, if we believe that academic institutions unfairly benefit some while harming and excluding others, it would be unproductive to place the blame on a single government minister. These structures have evolved over many generations, and have been shaped by historical, social, and political influences, making it difficult to assign accountability to one person or small group.
The notion of complicity is again useful for coming to grips with individuals’ role in propagating harmful systems and practices—with others through collective action. Kutz15 describes it as follows:
[O]ur lives are increasingly complicated by regrettable things brought about through our associations with other people or with the social, economic, and political institutions in which we live our lives and make our livings. Try as we might to live well, we find ourselves connected to harms and wrongs, albeit by relations that fall outside the paradigm of individual, intentional wrongdoing.
Some of the examples he mentions are customers buying tables made from rainforest wood, owning stocks in corrupt companies, being a citizen of a state harming others through military action, or living on land once inhabited by indigenous people.
Technology now plays an important role in making complicity ever more relevant. For example, user-driven social networks such as X are developed and deployed by somewhat easily identified individuals. However, I as a user have spent much time and effort making content for the platform, have guided people toward it, and have engaged with others in ways that might have kept them there. Many of us have spent considerable time scaffolding the structure that is a social network this way, and the potential harms and goods stemming from it cannot be isolated to one or a few persons in control of it. I am complicit. Other users are complicit. Regulators, politicians, and media are complicit. At least, that is what I am arguing here.
Such forms of complicity in technological harms are central for understanding the scope and nature of the potential obligation for ethicists to act based on their findings. That is because complicity reveals how their actions or inaction contribute to harms and ground their accountability, clarifying the ways in which they are involved in perpetuating these harms. And since one of the prerequisites of having an obligation to act is to understand and recognize both the harms and one's own role in propagating them, AI ethicists will more easily be obliged to take action—that is, have an obligation to act to prevent the harmful practices in which they are complicit.
The ethicist's challenge if they do not act in accordance with their stated personal standards is that they can be accused of hypocrisy. An interesting example could be someone researching environmental sustainability and forcefully arguing that climate change is a huge problem, yet frequently flying across the globe for various purposes. If these are work-related purposes, they could argue that they believe their positive contributions from doing so outweigh the costs associated with traveling. If they travel for fun and leisure, avoiding charges of hypocrisy would be more difficult. They could, however, as I discussed earlier, argue that they have done their part through normative ethics, and that they do not see themselves as responsible for making change happen. Perhaps they believe no individual is responsible until collective governmental action is taken to address the problem and, for example, eliminate the problem of free riders.
Similar challenges could arise for an environmental ethicist or animal rights scholar who eats meat, or a researcher vocally criticizing copyright violations and the environmental impacts of generative AI while still using it in their own research process. The latter might argue, in line with the climate activist who does leisure travel, that they don’t want to act unless others do so, not wanting to bear a disproportionate part of the costs of achieving change.
Those content with being partial or full hypocrites can clearly do descriptive and normative AI ethics. However, they will not as easily be able to do action AI ethics, as this is premised on the idea that the ethicist accepts a moral obligation to effect change in line with their knowledge and capacities.
While some might have a desire to impose their own moral standards on others, I posit that an obligation to effect change is something that can and should arise only from a personal realization and acceptance of the obligation. No one can impose such an obligation on others. For example, an ethicist adhering to objectivism cannot be expected to act upon an obligation to effect change that is premised on some notion of solidarity and collectivism. This reflects Næss’s eighth principle, which does not state that everyone has an obligation to act for change—only those who agree with and accept the preceding principles are obligated.
Doing Action AI Ethics: An Agenda
So where does that leave ethicists wanting to do action AI ethics? What can a researcher do if they believe that, for example, X has become toxic to individuals, to communities, to our politics, and to democracy itself, among other harms?
This is where it gets a bit difficult, as there is no one right answer to this question. As Næss stated, disagreements about strategies and priorities are unavoidable. For example, as a researcher I can state that my personal conviction is that I cannot be a part of the platform, and that I must abandon it. Others, however, will say that they believe that we must engage with the platform to change it. That might be part of why people such as Paris Marx and Evgeny Morozov stuck with X.
In closing, I propose some guiding questions for becoming an action AI ethicist, partly through individual actions and partly through actively contributing to making action AI ethics an effective movement through collaboration.
Engagement vs. divestment. The first question is to decide whether to engage with or distance oneself from the companies and services in question—here X, and thus also Musk and his ecosystem of companies and partners. This is similar to the debate about whether to “divest or engage” in the world of sustainable finance.4 For example, if an investor is concerned about a company’s lack of climate action, should they divest from the company or try to use their influence as a shareholder to change the company’s policies? This is also related to the two main strategies presented in Sætra et al.27—working for change from within or from outside of what they refer to as “the system.” The case of the passive investor in a company doing harm is also mentioned by Kutz15 in relation to complicity.
This is, in theory, an empirical question, but one that does not have a definite answer. We do not know in advance what the most effective strategy will be, so our actions must be guided by what we think is the best strategy for alleviating or mending the problems and challenges caused by the platform. That could, of course, be to try to turn X back into something good. If we thought it could be good at some point, that is. Or we could try to radically undermine, weaken, or destroy the platform. We might try to get users to leave; we could persuade advertisers to withdraw; or we could work to delegitimize X in the eyes of other influential actors. For example, in Norway in mid-2024, most newspapers frequently referred to posts on X, even when there were other equally good sources of information. Also, even when X is not a highly relevant source, its posts are routinely not just linked to but also embedded in news stories. Through such editorial choices, editors legitimize the platform and stick with X's "legacy legitimacy," so to speak—the legitimacy the platform had when it was Twitter. If we seek to undermine the platform, we might want that to stop.
Incremental vs. radical change. Associated with the first question is to decide whether you believe incremental change might be sufficient for fixing the problems at hand, or that radical change is necessary. Næss referred to his own theory as deep ecology, positioning it as an alternative to shallow ecology. The latter is concerned with fixing symptoms through incremental change, while deep ecology explicitly calls for radical change of economic and political systems. This, Næss argues, is required to achieve the necessary changes.20
While AI ethicists might agree on a description of the symptoms of deploying various AI systems, we tend to disagree about the more complete diagnosis of the problem—meaning the fuller description of why a certain symptom emerges. For example, Morozov tends to argue that capitalism is the core problem associated with modern AI's harms,19 while others might believe the causes of, for example, discriminatory systems can be isolated to bad data used for training and insufficient corrective measures in the end stages of development and deployment. The symptoms might be similar, but the cure—the theory of change describing what it would take to fix the symptoms—can be wildly different.
Individual vs. collective action. A major disagreement when discussing any kind of social problem concerns whether it should be seen as a problem that should be solved by individuals or one that requires systemic change.
In the case of X, some might believe it is up to individuals to stop frequenting and supporting the platform. If enough people do so, politicians will not feel forced to be there and advertisers and other users will find it less attractive. It is the same with climate change and other environmental challenges: Is it the customer’s duty to fly less and choose more expensive but more environmentally friendly products? Some say yes, while others strongly disagree.
The opposite of making individuals responsible is to call for collective voluntary or forced action. The first might come about if enough of us join forces and jointly pressure companies into change or leave their platforms, while the latter results from, for example, government action and regulation. In the EU, for example, the AI Act and the Digital Services Act relieve individual users of some responsibility to change the world of big tech, as they instead set clear limits on what big tech can do and how it can operate. However, such regulation is also the result of individuals joining forces in voluntary collective action to make policy changes, so the distinction is not absolute. That there is no clear distinction between individual and collective action is clearly shown in cases of complicity. Individuals are not relieved of moral accountability when they structurally support collectives that create harm (or good), even if they claim to be “just a small part” of said collectives.15 In all cases of collective action, individuals’ choices and actions have effects—positive or negative—and help either strengthen or weaken the collective.
For the action AI ethicist, it will be important to explore and identify their own stance on the importance and need for individual action vs. largely placing the impetus for action and accountability with other individuals or some form of collective. Næss, for example, was adamant that the researcher and all individuals engaged in an issue have a personal responsibility to act to effect change, but that change happens most effectively through joint action. Moreover, he believed that such change would not come about through top-down efforts or waiting for politics to play out—he championed the broad dissemination of knowledge and grassroots mobilization.21
Rivalry vs. support and collaboration. Another crucial success factor will be to build cooperative and powerful constellations, despite the many potential disagreements bound to arise between different individuals and groups of AI ethicists. Once again drawing on Næss's hopeful writings, we see that he believed meaningful cooperation can be achieved even among heterogeneous individuals and groups—that is, if there is a joint core objective they agree on. In Næss's case, it would be his seven premises. For us, it would be some new set of premises related to the dangers of X.
For exploratory purposes, we can propose a set of very basic and general principles focused on X, to see if they might serve a purpose similar to Næss’s premises:
- X is a platform that does not sufficiently take responsibility for or regulate hateful speech, conspiracy theories, and other content, and this generates and/or exacerbates significant individual, social, and political harms.
- X as a platform, despite its technical potential for relative neutrality, is trending toward a particular form of conservative right-wing politics and provides support and visibility for politicians and supporters of anti-democratic and authoritarian politics, particularly in the U.S.
- X is not characterized by meaningful transparency, accountability, or predictability, and thus generates potential harms and uncertainties unacceptable for a platform with its reach and power.
- X's owner publicly supports a range of theories and holds a worldview inimical to many liberal-minded (in the political theory sense, not the party politics sense) people, and he ideologically and monetarily supported Donald Trump in the lead-up to and aftermath of the 2024 election. Any support for X and its owner thus equates to support for Trump and the broader MAGA movement.
There are many other, and likely better, potential premises and formulations, but this tentative list tests whether common ground can be found in the AI ethics community. AI ethics has been plagued by a wide range of vocal and public disagreements and reciprocal attacks and animosity between different groups, for example between what are referred to as the “AI ethics” and the “AI safety” people.29 A recurring topic is the argument from some group that other groups are focusing on the wrong problems, either because they are not real problems or because they are simply not as important as the problems focused on by the accusing group.
However, arguments can clearly be made that it is both possible and necessary to focus on more than one problem at once.29,31 Threats to individuals, groups, and political institutions from X, alongside threats linked to how the success of the platform strengthens its owner and his ideology, might be issues that unite, for example, those concerned with discriminatory AI systems, underrepresentation of and algorithmic harms to minority groups, AI threats to democracy, and fears of accelerationist attitudes to AI leading to dangerous AI systems down the line. X arguably constitutes risks of all types.
If one agrees these threats are real, a key concern would be to combat the “balkanization” of AI ethics and to join, create, and support constellations and movements actively seeking to undermine X—unless you believe you can solve the problem alone, that is, or that collective action is not required. A concrete example could be to either initiate or join efforts to get people to leave the platform, while actively seeking and developing alternatives for those joining this initiative. For example, efforts to develop a new community on a platform deemed to be less problematic might be helpful, as would efforts to explore ways in which collective action could help disseminate knowledge about the challenges of X, to generate research that helps better understand these challenges and how to remedy them, and to actively engage with actors developing better alternatives to help these efforts.
Personal costs vs. solidarity. An important issue already hinted at is that being an action AI ethicist will always come with certain costs. Technology companies—the ones being researched by AI ethicists—are largely the ones able and willing to pay the most for competencies related to AI, including AI ethics. This means that if you are openly trying to combat these companies, you will be losing some personal opportunities. For gold, and for glory.
This further underscores the need for unity and solidarity among individuals and groups working toward a common goal. There is strength in numbers, and unity will increase the chances of successful rapid action—which would entail lower costs and potentially higher rewards.
Furthermore, the better organized an action AI ethics movement is, the better equipped it will be to use solidarity measures to shield junior researchers by, for example, having tenured and recognized researchers spearhead various initiatives and actions likely to be controversial or unpopular with those with power in the tech world.
Constructive attitudes and alternative-seeking. Action AI ethicists will also need to determine the directness of their involvement in identifying and seeking solutions to the problems that concern them. The duty described by Næss was to directly or indirectly act, which opens a wide array of strategies. One indirect way of taking action could be to simply find and fund or otherwise indirectly support actors working for the changes they deem necessary. Effective altruism is an example of such a strategy,10 insofar as its proponents can engage in activities far removed from the ethical challenges they see, if these activities allow them to earn money that they can subsequently funnel into support for relevant action. If I’m a top finance professional and AI ethicist, it might make the most sense for me to focus on maximizing my income and financially supporting others in direct action, rather than personally attacking the tech industry.
A more direct form would require the researcher themselves to act in line with their convictions. If I personally identified significant X-related challenges, I would have to take action in line with these findings. The most direct form of action would entail me directly speaking out against the platform, personally leaving the platform, and encouraging as many others as possible to do the same, guiding them in how to avoid the platform and actively seeking alternatives to it in the cases where X provides—or provided—something of real value and importance to people and society. Activists in the climate change space provide many examples of how scientists and others take direct action and expose themselves to anger, harm, ridicule, and punishment to directly confront the issues they care about.33 Incidentally, Arne Næss himself was an active participant in civil disobedience actions against the construction of hydro-power projects in Norway, objecting to how these projects destroyed nature and violated Indigenous rights.
The most successful action AI ethicist will be a part of the solution beyond only being an effective critic. The effective critic is, after all, largely the normative AI ethicist. To effectively change things, people require a path away from or toward something—not just to hear that something is good or bad. If I preach the dangers of X without taking part in identifying, disseminating, or creating alternatives, it is unlikely that most users will know how to act in ways conducive to achieving the changes we desire. Such constructive action will often require more than philosophers, however, as developers, private companies, citizens, and politicians will have crucial roles to play in creating alternatives to an undesirable state of affairs. Coalitions and cooperation are also crucial here, and that applies beyond the domain of AI ethics.
Three types of AI ethics at once. Finally, the aspiring action AI ethicist must see how action ethics builds on—and does not in any way stand in contrast to—descriptive and normative ethics. Strong action stems from solid analysis and ethical evaluations. Action without these becomes pure activism and not action AI ethics, as described here.
Seeing how the three forms all come together in action AI ethics is both important and challenging. First of all, identifying the proper course of action necessitates knowledge. Those wanting to do action AI ethics could do their own descriptive or normative analyses but, as with all research, they can and should do their best to connect their work to existing research. If they do, they can focus on further developing normative critiques and identifying effective action.
Another key reason to combine all three roles is that the action AI ethicist will be much better positioned to convince others and get them on board to develop the cooperation and coalitions described earlier. As already noted, this cooperation crosses professions, roles, and disciplines, and the ethicist providing a solid knowledge base will play a crucial role in these efforts.
Conclusion
In this article, I have argued that the AI ethicist can serve various functions related to understanding and shaping the implications of technological systems. I have used the case of X to demonstrate how doing ethics can consist of trying to describe factual consequences, evaluating these consequences, and directly or indirectly acting to bring about positive change.
While all three forms of ethics are necessary, I have argued that it is important for AI ethicists to recognize their potential complicity in the systems they study and consider whether they have a moral obligation to go beyond merely describing and evaluating the ethics of these systems. X is one example, but other platforms, such as Facebook, Instagram, and TikTok, are amenable to similar analyses. An obligation to act can be based on the special knowledge and capacities of AI ethicists, and might mean that they have a greater responsibility for mitigating these harms than others. Such a responsibility and obligation, however, cannot be imposed on them by others—it will have to come from their own analyses and recognition of such an obligation. The purpose of this article has partly been to foster such a recognition.
As technological systems increasingly permeate individuals’ lives and social structures, there is a greater need to actively evaluate and face the harms caused by these systems. Doing so will require effective cooperation and coalitions of those united in such efforts, as the ones in control of these technologies are people with great personal and institutional power.