‘Smart’ (or Machiavellian?) surveillance: The power of terminology

Yellow background and outline of a face, signifying facial recognition technology. Image by Safa, with visual elements from La Loma, used with permission.

This article was written by Safa for the series Digitized Divides, originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

The terms used to describe technology shape how we think about it. The word “smart” carries a positive connotation in most cases, but when it comes to technology, “smart” is usually used interchangeably with “efficient.” Imagine if, instead of calling systems of surveillance “smart,” we called them “Machiavellian” — how might that change our discourse around them, and our acceptance and adoption of them?

Unreliable systems

Tools of monitoring and control, such as CCTV, rely on facial recognition technology, which automatically identifies unique facial data, including measurements such as the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of the cheekbones, and the length of the jawline. Facial recognition is used by governments, police, and other agencies around the world, with significant results.
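
As a rough, purely hypothetical illustration of how such matching works in principle (the names, numbers, and threshold below are invented, and real systems use learned embeddings rather than hand-picked measurements), a facial recognition pipeline reduces a face to a vector of measurements and compares it against a database by distance:

```python
import math

# Hypothetical illustration only: each "faceprint" is a vector of normalized
# measurements (eye distance, nose width, eye-socket depth, cheekbone shape,
# jawline length) for an enrolled person.
database = {
    "person_a": [0.42, 0.31, 0.18, 0.55, 0.61],
    "person_b": [0.47, 0.29, 0.22, 0.50, 0.58],
}

def euclidean(u, v):
    """Distance between two faceprint vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(probe, threshold=0.1):
    """Return the closest enrolled identity, or None if nothing is close enough.
    The threshold is arbitrary here; tuning it trades false matches against
    false non-matches, which is where error and bias creep in."""
    best_name, best_dist = None, float("inf")
    for name, faceprint in database.items():
        d = euclidean(probe, faceprint)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# A probe captured from a camera frame (made-up numbers).
print(identify([0.43, 0.30, 0.19, 0.54, 0.60]))  # prints "person_a"
```

The key point is that the threshold and the enrolled database, both chosen by the operator, determine who counts as a “match,” and that is where the errors and biases described below enter.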

One unprecedented operation by US law enforcement resulted in hundreds of children and their abusers being identified in just three weeks. This technology has also been used to find missing and murdered Indigenous people (MMIP), helping 57 families find answers in just three years. While these results are remarkable and show how such technologies can be used to help people, there have also been numerous cases of facial recognition being used by US law enforcement in ways that have harmed people.

An app called CBP One, which asylum seekers at the US-Mexico border are required to use, obliges people to register themselves in a facial recognition system. But that system “[fails] to register many people with darker skin tones, effectively barring them from their right to request entry into the US.” The systems that centralize the data of asylum seekers and migrants make longitudinal tracking of children possible. Facial recognition technologies are also used by ICE (the US’s Immigration and Customs Enforcement agency) to monitor and surveil people awaiting deportation hearings.

In one study on facial recognition systems, MIT researcher Joy Buolamwini found that “darker-skinned females are the most misclassified group (with error rates of up to 34.7 percent). The maximum error rate for lighter-skinned males is 0.8 percent.” Harvard researcher Alex Najibi described how “Black Americans are more likely to be arrested and incarcerated for minor crimes than White Americans. Consequently, Black people are overrepresented in mugshot data, which face recognition uses to make predictions,” explaining how Black Americans are more likely than White Americans to become trapped in cycles and systems of racist policing and surveillance.

This sentiment is echoed in a report by the project S.T.O.P. — The Surveillance Technology Oversight Project. The UK and China are also among the countries that practice “predictive policing.” One researcher focusing on China concludes that it is “a more refined tool for the selective suppression of already targeted groups by the police and does not substantially reduce crime or increase overall security.” So the issue here is not simply one of flawed datasets; it is the discrimination that already exists in society, in which people who hold positions of power, or who command police or military force, can use technology to intensify their oppression of particular groups of people. Larger datasets will not remedy or negate the problem of people acting upon discrimination, racism, or other types of bias and hatred.

Algorithms are created by people (who inherently have their own biases) and are developed using our data, and the tools trained on that data can be used to harm other people. Algorithms, too, are used by governments, law enforcement, and other agencies worldwide. Tools and services from Google, Amazon, and Microsoft have all been used by Israel in its war on Gaza. In the United States, algorithms have been used to assign risk scores to individuals who have committed crimes, assessing their likelihood of committing future crimes. But researchers have found these algorithms to be “remarkably unreliable,” with a significant amount of bias in their design and implementation.
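
To make the data-bias point concrete, here is a deliberately simplified, hypothetical sketch (not a description of any real scoring system): if a risk score is learned from arrest records, and one group is policed and arrested more heavily for the same underlying behavior, the score will rate that group as higher risk even though the behavior is identical.

```python
# Hypothetical toy example: two groups with identical underlying behavior,
# but group B is policed (and therefore arrested) twice as often.
underlying_offense_rate = {"group_a": 0.10, "group_b": 0.10}
policing_intensity = {"group_a": 1.0, "group_b": 2.0}

# Arrest records are what the "risk model" actually sees.
arrest_rate = {
    g: underlying_offense_rate[g] * policing_intensity[g]
    for g in underlying_offense_rate
}

# A naive score that treats arrest history as ground truth will label
# group B as twice as "risky" despite identical behavior.
def naive_risk_score(group):
    return arrest_rate[group]

for g in ("group_a", "group_b"):
    print(g, round(naive_risk_score(g), 2))
# group_a 0.1
# group_b 0.2
```

No amount of additional arrest data fixes this, because the data itself encodes the unequal policing.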

In Spain, an algorithm was used to predict how likely a domestic abuse survivor would be to be abused again, with the intention of directing support and resources to the people who need them most urgently in an overburdened system. But the algorithm isn’t perfect, and over-reliance on such flawed tools in high-stakes situations has had dire consequences. In some cases, survivors mislabelled as “low risk” have been murdered by their abusers despite their best efforts to seek help and report the abuse to the authorities.

In the Netherlands, tax authorities used an algorithm to help identify child care benefits fraud. Tens of thousands of lower-income families were penalized, many fell into poverty, and more than a thousand children were wrongfully placed in foster care. “Having dual nationality was marked as a big risk indicator, as was a low income [… and] having Turkish or Moroccan nationality was a particular focus.”

Israel surveils and oppresses Palestinians

Israel’s surveillance industry is world famous. A 2023 report by Amnesty International mapped the visible Israeli surveillance system and found one or two CCTV cameras every five meters in Jerusalem’s Old City and Sheikh Jarrah in East Jerusalem. 

Since 2020, Israel’s military has run “Wolf Pack,” a vast and detailed database profiling virtually all Palestinians in the West Bank, including their photographs, family connections, education, and more. Wolf Pack includes the “Red Wolf,” “White Wolf,” and “Blue Wolf” tools:

  • Red Wolf: The Red Wolf system is part of the Israeli government’s official CCTV facial recognition infrastructure, used to identify and profile Palestinians as they pass through checkpoints and move through cities. It has been reported that Israel’s military uses Red Wolf in the Palestinian city of Hebron. According to a project by B’Tselem and Breaking the Silence, the Israeli military has set up 86 checkpoints and barriers across the 20 percent of Hebron, referred to as “H2,” that is under Israeli military control. The checkpoints are hard to avoid in H2. As Masha Gessen wrote, Palestinians living there “go through a checkpoint in order to buy groceries and again to bring them home.” According to UNRWA, 88 percent of children cross checkpoints on their way to and from school.
  • White Wolf: Another app, called White Wolf, is available to Israeli military personnel guarding illegal settlements in the West Bank, allowing them to search the database of Palestinians. Since Israel’s war on Gaza began after the October 7, 2023, attacks by the Islamic Resistance Movement (aka Hamas) on Israelis, Israel has rolled out a similar facial recognition registry of Palestinians in Gaza.
  • Blue Wolf: Using the app called Blue Wolf, the Israeli military has been compiling a massive biometric registry of Palestinians, often at checkpoints and at gunpoint, and sometimes at people’s private homes in the middle of the night. Israeli soldiers take pictures of Palestinians, including children, sometimes by force. Israeli soldiers also note within the app any “negative impressions [they] have of a Palestinian’s conduct when encountering them.” One source added, “It’s not that the military has said, let’s make the Blue Wolf so [the Palestinians] can pass through more easily. The military wants to enter the people into its system for control.”

A 2025 article also revealed how the Israeli military was using a large language model (the kind of model that powers tools like ChatGPT) to surveil Palestinians. One Israeli intelligence source stated, “I have more tools to know what every person in the West Bank is doing. When you hold so much data, you can direct it toward any purpose you choose.” While Israel’s military is not the only example of a government training AI tools on civilian data, the case offers important insight into how the latest technologies can be adopted for widespread monitoring and control.

As researcher Carlos Delclós said, “Privacy is not merely invaded; it is obliterated, as human lives are fragmented into datasets optimised for corporate gain,” and the same message can be extended to political gain. Regardless of whether we call a technology by positive or negative terms, at the end of the day, the technology itself cannot be separated from the operators (i.e., the humans) who deploy it. If the people who use these technologies also live in societies and work within systems where discrimination and control are documented concerns, it seems quite possible that the tech will be used to cause harm. We don’t even need to imagine it. We can simply look around with both eyes open.
