As artificial intelligence (AI) systems increasingly mediate our social world, regulators are rushing to protect citizens from potential AI harms. Many AI regulations focus on assessing the potentially biased outcomes of AI. But AI systems are always embedded in social contexts and decision-making processes that are typically distributed across a range of human and machine agents. Bias and discrimination can occur anywhere in this human-machine network. Focusing only on the potentially biased outcomes of an AI system will not fix the bias and discrimination problems that pervade the whole human-machine network. Addressing this issue means focusing AI accountability approaches on practices and processes, rather than just machines or just humans.
Let’s take the world of recruiting as a case study. Recruiting has become a frontier of AI-driven automation. AI recruiting tools support candidate search on job platforms, candidate screening (such as video interviewing or technical interviews to test coding skills), the crafting of job descriptions, and the integration of AI (for example, chatbots) into applicant tracking systems. Using these tools can also produce discrimination. Infamous examples include Amazon’s sexist hiring AI1 and Facebook’s ageist and gendered job ads.2 Fueled by the COVID-19 pandemic and demand for remote recruiter-candidate interaction, the human resources (HR) tech market is large, and it continues to grow (projected to reach $39.90 billion by 2029).4
Even as particularly problematic tools are retired, issues of technology-mediated and AI-accelerated bias and discrimination persist. AI tools used in candidate assessments (such as interviews or tests) are prone to error, often disadvantage certain populations,6 or are based on pseudo-scientific constructs.7 Regulators are paying heightened attention to the use of AI in recruiting and employment, with influential regulation explicitly designating AI in HR as a “high risk” area of AI deployment.3
But discrimination that persists in HR cannot be attributed solely to the AI. It is the result of a complex sociotechnical system that includes both AI and the many people engaged in HR processes and practices. In recruiting alone, this includes sourcing specialists, talent acquisition managers, recruiters, hiring managers, HR administrators, and others, who interact with and potentially make decisions about candidates. Various AI systems and other technologies are spread across that network of actors. What is needed to mitigate potential discrimination and harm is a closer look at the professional practice of recruiting, how recruiting professionals use and make sense of AI systems, and how this affects their discretionary decision making.
Keeping It Old School: The Persistence of Boolean Search
In low-volume recruiting (that is, recruiting from a scarce talent pool so finding candidates is hard), recruiters’ traditional professional practice revolves around Boolean search. When searching for talent in databases, they assemble the specifications of the job into what they anticipate will be a powerful Boolean string. A professionally crafted Boolean search string designed to locate a computer programmer who has experience with a particular group of programs and possesses leadership skills might appear as shown in Figure 1.
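For illustration, such a string (with hypothetical keywords invented here for the sketch) might read:

```
("software engineer" OR programmer OR developer)
AND (Java OR Python OR C++)
AND (leadership OR "team lead" OR "engineering manager")
NOT (intern OR student)
```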

The Boolean search method is grounded in binary logic, with the simple premise that statements can only be true or false. It has transcended its mathematical origins to become a cornerstone of information retrieval writ large (as anyone who has been taught to use a library catalog knows). Boolean search allows users to express the relationship between keywords in a search, rather than just the presence of the keywords (see Figure 2). Key to this are the three operators AND, OR, and NOT. The AND operator narrows the search by including only the results that contain all the specified keywords. The OR operator broadens the search to include results that contain any of the chosen keywords. The NOT operator excludes results that contain the keyword following it.
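To make the operators concrete, the following sketch runs them over a toy candidate "database" (the names and keywords are hypothetical, invented for illustration) and shows how AND narrows, OR broadens, and NOT excludes:

```python
# Toy illustration of Boolean search operators; profiles are hypothetical.
candidates = {
    "Ada":   {"python", "java", "leadership"},
    "Grace": {"cobol", "leadership"},
    "Linus": {"c", "python"},
}

def search(db, required=(), any_of=(), excluded=()):
    """Return names whose keyword sets satisfy AND / OR / NOT constraints."""
    results = []
    for name, keywords in db.items():
        if not all(k in keywords for k in required):           # AND narrows
            continue
        if any_of and not any(k in keywords for k in any_of):  # OR broadens
            continue
        if any(k in keywords for k in excluded):               # NOT excludes
            continue
        results.append(name)
    return results

print(search(candidates, required={"python"}))                     # → ['Ada', 'Linus']
print(search(candidates, any_of={"java", "cobol"}))                # → ['Ada', 'Grace']
print(search(candidates, required={"python"}, excluded={"java"}))  # → ['Linus']
```

The key property, as the article notes, is determinism: the same query against the same database always returns the same results, so each keyword's effect is directly observable.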

Boolean logic operates within recruiters’ minds as they carefully select the most fitting keywords for the role they are trying to fill. Working to match job specifications with ideal candidates, they turn Boolean search across vast candidate databases (such as LinkedIn) into an epistemology—a way of knowing and understanding the world of potential hires.8
Constructing a Boolean search string for finding fitting job candidates is not merely a technical exercise. It is a labor-intensive and iterative process that demands creativity, analytical rigor, and often years of experience. Recruiters typically invest considerable time and effort in iteratively refining their Boolean search strings. Each keyword selection, operator placement, and logical structure is a deliberate choice, designed to surface the “right” candidate profiles. This process transcends mere keyword lists; it demands an iterative dance between logic and intuition, honed through experience and a deep knowledge of the target talent pool. For a recruiter, finding the “perfect” Boolean string is like finding a vein of gold. It can vastly improve the efficiency and efficacy of candidate search for a specific role, or type of role. Boolean search allows recruiters to iteratively adapt their queries in real time based on the feedback provided through the search engine results. This is where recruiters can exercise the discretionary decision-making power that is the essence of their job: deciding who is the “right” candidate.
In traditional, non-AI-driven information retrieval by way of Boolean search, recruiters can easily discern the relationship between the keywords expressed in the Boolean string and the search results. This gives them discretionary room to maneuver. They can rely on the search engine faithfully delivering on their query, and they can predict the effects of tweaks to their Boolean expression. In other words, traditional Boolean search provides the kind of transparency recruiters require for the discretionary decisions that are specific to their profession.
AI-Driven Search and Boolean Epistemology
In AI-driven candidate search, however, the system does not faithfully deliver on the keyword relationship expressed in the Boolean string. AI systems, including generative AI systems that respond to prompts, are calibrated to produce statistically probable outputs based on a search or prompt (as well as various unknown factors, such as previous search behavior), rather than the precise keyword relationship. Here, the system interprets the keywords in ways that are not discernible (and therefore not actionable) by recruiters. For example, a recruiter may include the term “New York City” in the string because they need a candidate who is based in New York City for tax reasons. The AI may interpret this in undesirable ways and, for example, suggest candidates as “most relevant” (and ranked at the top of the search results) who are based in Hoboken, NJ, USA. A new search with the exact same Boolean expression run a few hours later may show candidates based in the Hudson Valley, NY, USA.
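The contrast can be sketched with a toy example. The "proximity" scores below are invented to stand in for the model's hidden interpretation of the query; real AI-driven search systems are far more complex and opaque:

```python
# Hypothetical sketch: exact Boolean matching vs. a toy "relevance" ranking.
candidates = [
    {"name": "A", "location": "New York City"},
    {"name": "B", "location": "Hoboken"},
    {"name": "C", "location": "Hudson Valley"},
]

# Traditional Boolean search: the keyword relationship is honored exactly.
boolean_hits = [c["name"] for c in candidates if c["location"] == "New York City"]

# Toy "AI" ranking: an invented, hidden notion of proximity stands in for
# the model's latent interpretation of "New York City".
proximity = {"New York City": 1.0, "Hoboken": 0.9, "Hudson Valley": 0.6}
ranked = sorted(candidates, key=lambda c: proximity[c["location"]], reverse=True)
ai_hits = [c["name"] for c in ranked]

print(boolean_hits)  # → ['A']
print(ai_hits)       # → ['A', 'B', 'C']  (near-matches surface as "relevant")
```

In the Boolean case the recruiter can see exactly why each result appears; in the ranked case the scores that determine the ordering are hidden from them and may shift between runs.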
It remains unclear to the recruiter how and why the AI system powering the search made this leap. The Boolean epistemology that recruiters traditionally deploy affects how they make sense of AI and influences if and where mistrust and potential bias manifest in HR. The “interpretive lift” undertaken by the AI system is palpable but never consistent or squarable with the professional epistemology recruiters use, and it curbs the discretionary decision space available to them. They cannot tweak the keywords to better understand the effects of each one on the search results. In AI-driven search, these causal effects cannot be known by recruiters. In other words: Boolean epistemology and AI epistemology clash.
Navigating the Chasm
The clash between these two epistemologies leads recruiters to mistrust the AI systems that their own employers often require them to use. Recruiters are acutely aware of the “epistemological clash.” They know that, through the machine learning feedback loop (in which data generated through the interaction with an AI system re-enters the system and affects its predictions), their interactions with AI-driven search engines are recorded and affect subsequent search results. To preserve their discretionary decision-space (which is central to their professional identity), recruiters sometimes try to “neutralize” the AI-driven search system or “confuse” it. They may input a vast range of different Boolean search strings, save all results in a separate spreadsheet, and manually comb through them. They may also manually infer features they deem important rather than relying on the machine to do it. For example, they may infer gender or racial identity from location or educational background to try to ensure a diverse candidate pool and avoid bias and discrimination.
Viewed within the larger socio-technical work system, recruiters’ interactions with AI-driven search tools reclaim discretionary capacity and allow them, not machines, to make decisions about candidates. This involves substantial work, as Boolean searches must be meticulously composed and continuously tweaked, which reduces the alleged time-saving value of AI systems. It also demonstrates how AI-driven recruiting systems may be used in ways that sustain, rather than curb, issues of (human) bias and discrimination.
Thus, it is insufficient to address AI discrimination by looking only at the potentially biased outcomes of an AI system. A more nuanced approach is needed as the fields of AI ethics, accountability, and transparency progress, and as AI regulation becomes more common. This becomes particularly important as generative AI systems enter the HR space, making the AI’s interpretation of search commands or prompts even less transparent and adding the risk of “hallucinations.”
Understanding how professional discretion is affected by new forms of AI-driven automation, within and beyond HR, is extremely important. We must treat the black box of AI as a socio-technical phenomenon in which professional epistemologies and practices clash with hidden AI functionalities. Concretely, this means integrating work practices and decision-making processes into AI accountability efforts. Only by taking this larger systems view can we avoid the “many hands” problem that makes it so hard to identify who is responsible for the harms that computer systems can cause.5 Centering what people are doing and how—including with machines—rather than treating machines as the sole focus of regulatory attention, can help address the continuation of human-machine bias.
Conclusion
AI functionalities clash with the Boolean epistemology of candidate search in professional recruiting. This encourages human intervention and enables continued employment bias and discrimination. Employment fairness is of enormous ethical importance, but HR recruitment is just one of many areas of life where AI has been implicated in bias and discrimination. Focusing solely on AI opacity as the cause of bias and discrimination misses the fundamental socio-technical nature of the phenomenon and points to ineffectual solutions.
We are in urgent need of more empirically grounded research on how AI is actually used so that we can understand and address where and how bias and discrimination occur in the distributed human-machine decision-making networks that influence important life outcomes. This is increasingly urgent with the rise of generative AI technologies such as ChatGPT and their rapid adoption. Focusing accountability approaches on practices, processes, and technologies, rather than just machines or just humans, is a crucial first step toward building a just society.