Safa – Global Voices https://globalvoices.org Citizen media stories from around the world

Do you follow?: How technology can exacerbate ‘information disorder’ https://globalvoices.org/2025/11/10/do-you-follow-how-technology-can-exacerbate-information-disorder/ Mon, 10 Nov 2025 09:30:34 +0000 ‘It is very, very difficult to dislodge [misinformation] from your brain’

Originally published on Global Voices

Two pink birds with strings of emails beneath them. Image by Liz Carrigan and Safa, with visual elements from Alessandro Cripsta, used with permission.


This article was written by Safa for the series “Digitized Divides” and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

Social media has been a key tool of information and connection for people who are part of traditionally marginalized communities. Young people access important communities they may not be able to access in real life, such as LGBTQ+ friendly spaces. In the words of one teen, “Throughout my entire life, I have been bullied relentlessly. However, when I’m online, I find that it is easier to make friends… […] Without it, I wouldn’t be here today.” But experts are saying that social media has been “both the best thing […] and it’s also the worst” to happen to the trans community, with hate speech and verbal abuse resulting in tragic real-life consequences. “Research to date suggests that social media experiences may be a double-edged sword for LGBTQ+ youth that can protect against or increase mental health and substance use risk.” 

In January 2025, Mark Zuckerberg announced that Meta (including Facebook and Instagram) would end their third-party fact-checking program in favor of the model of “community notes” on X (formerly Twitter). Meta’s decision included ending policies that protect LGBTQ+ users. Misinformation is an ongoing issue across social media platforms, reinforced and boosted by the design of the apps, with the most clicks and likes getting the most rewards, whether they be rewards of attention or money. Research found that “the 15% most habitual Facebook users were responsible for 37% of the false headlines shared in the study, suggesting that a relatively small number of people can have an outsized impact on the information ecosystem.”

Meta’s decision to end its third-party fact-checking program has raised alarm bells among journalists, human rights organizations, and researchers. The UN’s High Commissioner for Human Rights, Volker Türk, said in response: “Allowing hate speech and harmful content online has real world consequences.” Meta has been implicated in or accused of supercharging the genocide of the Rohingya in Myanmar, as well as fueling ethnic violence in Kenya, Ethiopia, and Nigeria, at least in part due to the rampant misinformation on its platform.

“We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook … are affecting societies around the world,” said one leaked internal Facebook report from 2019. “We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.” The International Fact-Checking Network responded to the end of the nine-year fact-checking program in an open letter shortly after Zuckerberg’s 2025 announcement, stating that “the decision to end Meta’s third-party fact-checking program is a step backward for those who want to see an internet that prioritizes accurate and trustworthy information.”

Unverifiable posts, disordered feeds

The algorithms behind social media platforms control which information is prioritized, repeated, and recommended to people in their feeds and search results. But even with numerous reports, studies, and shifting user behaviors, the companies themselves have done little to adapt their user interface designs to more modern ways of interacting and to facilitate meaningful user fact-checking.

Even when media outlets publish corrections to false information and the unsubstantiated claims they perpetuate, the corrections aren’t enough to reverse the damage. As described by First Draft News: “It is very, very difficult to dislodge [misinformation] from your brain.” When false information is published online or in the news and begins circulating, even if it is removed within minutes or hours, the “damage is done,” so to speak. Corrections and clarifying statements rarely get as much attention as the original piece of false information, and even if they are seen, they may not be internalized.

Algorithms are so prevalent that, at first glance, they may seem trivial, but they are actually deeply significant. Well-known cases like the father who found out his daughter was pregnant through what was essentially an algorithm, and another father whose Facebook Year in Review “celebrated” the death of his daughter, illustrate how the creators, developers, and designers of algorithmically curated content should be considerate of worst-case scenarios. Edge cases, although rare, are significant and warrant inspection and mitigation. 

Pushing audiences further down the rabbit hole, recommendation algorithms across social media can radicalize viewers through the content they prioritize and serve, as a multitude of reports and studies have found. “Moral outrage, specifically, is probably the most powerful form of content online.” A 2021 study found that TikTok’s algorithm led viewers from transphobic videos to violent far-right content, including racist, misogynistic, and ableist messaging. “Our research suggests that transphobia can be a gateway prejudice, leading to further far-right radicalization.” YouTube was also once dubbed the “radicalization engine,” and still seems to be struggling with its recommendation algorithms, as in the more recent report of YouTube Kids sending young viewers down eating disorder rabbit holes. Ahead of German elections in 2025, researchers found that social media feeds across platforms, but especially on TikTok, skewed right-wing.

An erosion of credibility

People are increasingly looking for their information in different ways, beyond traditional news media outlets. A 2019 report found that teens were getting most of their news from social media. A 2022 article explained how many teens are using TikTok more than Google to find information. That same year, a study explored how adults under 30 trust information from social media almost as much as national news outlets. A 2023 multi-country report found that fewer than half (40 percent) of total respondents “trust news most of the time.” Researchers warned the trajectory of information disorder could result in governments steadily taking more control of information, adding “access to highly concentrated tech stacks will become an even more critical component of soft power for major powers to cement their influence.” 

Indonesia’s 2024 elections saw the use of AI-generated digital avatars take center stage, especially in capturing the attention of young voters. Former candidate and now President Prabowo Subianto used a cute digital avatar created by generative AI across social media platforms, including TikTok, and was able to completely rebrand his public image and win the presidency, distracting from accusations of major human rights abuses against him. Generative AI, including chatbots like ChatGPT, is also a key player in information disorder because of how realistic and convincing the texts and images it produces are.

Even seemingly harmless content on spam pages like “Shrimp Jesus” can result in real-world consequences, such as the erosion of trust, falling for scams, and having one’s data breached by brokers who feed that information back into systems, fueling digital influence. Furthermore, the outputs of generative AI may be highly controlled. “Automated systems have enabled governments to conduct more precise and subtle forms of online censorship,” according to a 2023 Freedom House report. “Purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern.”

As has been echoed time and again throughout this series, technology is neither good nor bad — it depends on the purpose for which it is used. “Technology inherits the politics of its authors, but almost all technology can be harnessed in ways that transcend these frameworks.” These various use cases and comparisons can be useful when discussing specific tools and methods, but only at a superficial level — for instance, regarding digital avatars which were mentioned in this piece. 

One key example comes from Venezuela, where the media landscape is rife with AI-generated pro-government messages and people working in journalism face threats of imprisonment. In response, journalists have used digital avatars to help protect their identities and maintain privacy. This is, indeed, a story of resilience, but it sits within a larger and more nefarious context of power and punishment. While any individual tool can reveal both benefits and drawbacks in its use cases, zooming out to the bigger picture reveals power systems and structures that put people at risk; the trade-offs of technology are simply not symmetrical.

Two truths can exist at the same time, and it is significant that technology is used both for harnessing strength and for harming and oppressing people.

Behind our screens: The truth about ‘artisanal’ mining and ‘natural’ technology https://globalvoices.org/2025/10/05/behind-our-screens-the-truth-about-artisanal-mining-and-natural-technology/ Sun, 05 Oct 2025 12:30:32 +0000 https://globalvoices.org/?p=844378 Revealing the true costs of cobalt extraction

Originally published on Global Voices

Image by Liz Carrigan and Safa, with visual elements from Yiorgos Bagakis, Alessandro Cripsta, and La Loma, used with permission.

This article was written by Safa for the series ‘Digitized Divides’ and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

When people talk about “natural” versus “artificial,” there is an assumption that technology sits on the artificial side, but the elements and materials it is made from come from the earth and are handled by many people. 

What really is “natural,” after all? “It is impossible to talk about a green energy transitioning world without these minerals,” said humanist, leader and speaker Kave Bulambo in a 2024 speech. “When you start to dig deep to try and understand this equation, you realize that under this shiny Big Tech movement lies a world of exploitation for men, women, and even children laboring in cobalt mines in the [Democratic Republic of] Congo.”

It would be disingenuous to attempt to disentangle the human rights abuses connected to creating technologies from their environmental impacts. Siddharth Kara, a researcher of modern-day slavery, discussed the environmental impacts of cobalt mining: “Millions of trees have been cut down, the air around mines is hazy with dust and grit, and the water has been contaminated with toxic effluents from the mining processing.”

Cobalt and ‘green’ energy

Cobalt is a stone that has an almost eerie blue color — for centuries, it has been used in the arts. It has also become essential for the manufacturing of rechargeable batteries — like those that enable smartphones, laptops, electric cars, and more. Cobalt is just one of the natural resources powering the “green energy revolution.” But this important stone can be toxic to touch and breathe, especially in high doses. 

Large deposits of cobalt have been found in the DRC, accounting for over 70 percent of the world’s reserves. To understand the harmful effects of cobalt mining in the DRC, it is essential to consider its colonial history. Exploitation of the country’s resources persisted even after it gained formal independence in 1960, leaving a legacy that continues to shape the country’s mining sector today. Kolwezi, a city in the DRC, was built by Belgium under an apartheid-style system of urban segregation, and now has many large open-pit mines situated in and around its periphery.

Both multinational companies with concessions and artisanal miners are involved in cobalt mining in the DRC, though industrial mines now dominate the region. Artisanal and small-scale mining (ASM) remains widespread, with thousands of informal miners working in dangerous conditions to extract cobalt by hand. Kara described how so-called “artisanal miners” — including children — are digging for cobalt: “The bottom of the supply chain, where almost all the world's cobalt is coming from, is a horror show.” 

What comes to mind when you think of something “artisanal”? It is probably not informal workers digging in hazardous, often toxic conditions, either earning a subsistence income for their families or working in small groups to extract minerals for commercial sale. “Artisanal” has a meaning of small-scale and handmade, which is true in a sense for the work of “artisanal miners.” But the term “artisanal” is evocative of a quaint neighborhood farmers’ market or traditional handmade cheese or soap — not of children and adults digging toxic stones from the ground with their bare hands at gunpoint.

The term partly comes from its low-tech nature, as it involves individuals mining deposits that are either unprofitable, unsafe, or otherwise unsuitable for large-scale mining companies. Yet, artisanal mining is far from small-scale. Over 100 million people worldwide are engaged in or rely on the income it generates. While it may seem more wholesome than industrial mining, an industry with one of the worst track records for human rights abuses, artisanal mining often lacks environmental and worker safeguards, as well as protections for women’s and children’s rights. 

This form of mining is common in Kolwezi, especially in areas where people have been displaced by large-scale mining projects. Despite attempts to formalize the sector, informal mining persists, with reports of “Creuseurs” (“diggers,” as they are known locally) continuing to dig under their homes or in “illegal” sites outside the formal mine boundaries. As one miner, Edmond Kalenga, put it: “The minerals are like a snake moving through the village. You just follow the snake.”

‘Blood cobalt’

A 2022 Amnesty International report detailed several case studies of human rights abuses at three sites, using documentary evidence, satellite images, and interviews with former residents to determine that people had been forcibly evicted from their homes in the name of energy transition mining. Forced evictions constitute a fundamental breach of human rights, leading to loss of livelihood and of other human rights, such as access to basic services, including health and education. The forced evictions occurred as part of the government's efforts to formalize the mining sector, carried out in collaboration with mining companies. People living close to polluted mines are exposed to severe health risks. The DRC mining region is one of the 10 most polluted areas in the world. Research suggests a correlation between exposure to heavy metals such as cobalt and birth defects, and children have been found with high concentrations of cobalt in their urine.

In addition to the human rights violations already mentioned, the innumerable environmental and health costs are interconnected, with issues such as biodiversity loss, pollution (air, soil, water), and the socio-economic consequences of job insecurity, violence, and loss of livelihoods. These impacts also lead to further challenges, including displacement, gender-based violence, and the erosion of cultural knowledge. Diamonds are not the only conflict mineral; as you can see, cobalt is among the many minerals which are extracted through degrading means, with devastating results. 

Companies that make lithium batteries, such as Tesla, occasionally respond to public calls for supply chain transparency; however, as demand for cobalt grows, businesses involved in battery manufacturing must pay attention to ethical and human rights issues along the entire supply chain. Alphabet (Google’s parent company), Apple, Dell, Microsoft, and Tesla have all been accused of purchasing cobalt which was gathered by means of forced labor, and deliberately obscuring their dependence on child labor — including children living in extreme poverty.

While a US court found that companies purchasing from suppliers were not responsible for the practices of those suppliers, further doubts have already been raised against Apple. “It is a major paradox of the digital era that some of the world’s richest, most innovative companies are able to market incredibly sophisticated devices without being required to show where they source raw materials for their components,” said Emmanuel Umpula, executive director of Afrewatch (Africa Resources Watch).

The European Parliament has passed a law obliging large companies to conduct human rights and environmental due diligence — a step towards holding corporations accountable for rights violations by their suppliers. But supply chains themselves are not necessarily reliable narrators. In the case of cobalt, suppliers may mix cobalt mined with child labor together with child-labor-free cobalt in refineries, making it difficult or even impossible to trace. Furthermore, child-labor-free cobalt is not necessarily free of human exploitation and harsh conditions. For more in-depth information on due diligence and accountability in the DRC’s mining sector, the Carter Center highlights several key recommendations.

Our energy consumption will only continue to increase with developments like ChatGPT, cryptocurrencies, and faster internet. One researcher found that using generative AI to create one image uses as much energy as charging a smartphone. A report by Goldman Sachs, a multinational investment firm, found that one AI-powered search used 10 times more electricity than a regular search. Both Google and Microsoft have self-reported that their carbon emissions have grown as a result of AI. With water and food scarcity posing real-world threats and the climate ever warming, how long will the planet be able to sustain these systems? When we finally take a critical look at the nature that’s powering our screens, we may see its poisonous impacts on people and the planet.

Systematized supremacy: The consequences of blind faith in technology https://globalvoices.org/2025/09/22/systematized-supremacy-the-consequences-of-blind-faith-in-technology/ Mon, 22 Sep 2025 06:30:30 +0000 https://globalvoices.org/?p=843846 Technology itself isn’t good or bad; it is about the humans behind it

Originally published on Global Voices

Illustration of two digital faces on either side of a bank of technologies with people looking at it. Image by Liz Carrigan and Safa, with visual elements from Yiorgos Bagakis and La Loma, used with permission.


This article was written by Safa for the series ‘Digitized Divides’ and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

Technology can be used to help people or to harm people, but it isn’t necessarily an either/or situation: it can be used simultaneously for the benefit of one person or group while harming another.

While some may ask whether the benefits of using personal data to implement widespread policies and actions outweigh the harms, comparing the benefits and harms in this balanced, binary, two-sided approach is a misguided way to assess it critically, especially when the harms include violence against civilians. After all, human suffering is never justified, and there are no ways to sugarcoat negative repercussions in good faith. Technological bothsidesism attempts to determine the “goodness” or “brownie points” of technology, which is a distraction, because technology itself isn’t good or bad — it is about the humans behind it, the owners and operators behind the machines. Depending on the intentions and aims of those people, technology can be used for a wide variety of purposes.  

Lucrative and lethal

Israel uses data collected from Palestinians to train AI-powered automated tools, including those co-produced by international firms, like the collaboration between Israel’s Elbit Systems and India’s Adani Defence and Aerospace, that have been deployed in Gaza and across the West Bank. Israeli AI-supercharged surveillance tools and spyware, including Pegasus, Paragon, QuaDream, Candiru, Cellebrite, as well as AI weaponry, including the Smart Shooter and Lavender, are world-famous and exported to many places, including South Sudan and the United States.

The US is also looking into ways to use home-made and imported facial recognition technologies at the US–Mexico border to track the identities of migrant children, collecting data they can use over time. Eileen Guo of MIT Technology Review wrote: “That this technology would target people who are offered fewer privacy protections than would be afforded to US citizens is just part of the wider trend of using people from the developing world, whether they are migrants coming to the border or civilians in war zones, to help improve new technologies.” In addition to facial recognition, the United States is also collecting DNA samples of immigrants for a mass registry with the FBI.

In 2021, US-headquartered companies Google and Amazon jointly signed an exclusive billion-dollar contract with the Israeli government to develop “Project Nimbus,” which was meant to advance technologies in facial detection, automated image categorization, object tracking, and sentiment analysis for military use — a move that was condemned by hundreds of Google and Amazon employees in a coalition called No Tech for Apartheid.

The Israeli army also has ties with Microsoft for machine learning tools and cloud storage. These examples are brought in here to show the imbalance of power within the greater systems of oppression at play. These tools and corporate ties are not accessible to all potential beneficiaries; it would be inconceivable for Google, Amazon, and Microsoft to sign these same contracts with, say, the Islamic Resistance Movement (Hamas).

‘Smart’ weapons, nightmare fuel

Former US President Barack Obama is credited with normalizing the use of armed drones in non-battlefield settings. The Obama administration described drone strikes as “surgical” and “precise,” at times even claiming that the use of armed drones resulted in not “a single collateral death,” when that was patently false. After Obama took office in 2009, drone strikes became commonplace, and their use expanded, in battlefield and non-battlefield settings, under subsequent administrations.

Critics say the use of drones in warfare gives governments the power to “act as judge, jury, and executioner from thousands of miles away” and that civilians “disproportionately suffer” in “an urgent threat to the right to life.”  In one example, the BBC described Russian drones as “hunting” Ukrainian civilians. 

In 2009, Human Rights Watch reported on Israel’s use of armed drones in Gaza. In 2021, Israel started deploying “drone swarms” in Gaza to locate and monitor targets. In 2022, Omri Dor, commander of Palmachim Airbase, said, “The whole of Gaza is ‘covered’ with UAVs that collect intelligence 24 hours a day.” In Gaza, drone technology has played a major role in increasing damage and targets, including hybrid drones such as “The Rooster” and “Robodogs” that can fly, hover, roll, and climb uneven terrain. Machine gun rovers have been used to replace on-the-ground troops. 

The AI-powered Smart Shooter, whose slogan is “one-shot, one-hit,” boasts a high degree of accuracy. The Smart Shooter was installed during its pilot stage in 2022 at a Hebron checkpoint, where it remains active to this day. Israel also employs “smart” missiles, like the SPICE 2000, which was used in October 2024 to bomb a Beirut high-rise apartment building

The Israeli military is considered to be one of the top 20 most powerful military forces in the world. Israel claims that it conducts “precision strikes” and does not target civilians, but civilian harm expert Larry Lewis has said Israel’s civilian harm mitigation strategies have been insufficient, with its campaigns seemingly designed to create risk to civilians. The aforementioned technologies employed by Israel have helped its military use disproportionate force to kill Palestinians in Gaza en masse. As an IDF spokesperson described, “We’re focused on what causes maximum damage.”

While AI-powered technologies reduce boots on the ground and, therefore, potential injuries and casualties of the military who deploy them, they greatly increase casualties of those being targeted. The Israeli military claims AI-powered systems “have minimized collateral damage and raised the accuracy of the human-led process,” but the documented results tell a different story. 

Documentation reveals that at least 13,319 of the Palestinians who were killed were babies and children between 0 and 12 years of age. The UN’s reports of Palestinian casualties are said to be conservative by researchers, who estimate the true death toll to be double or even more than triple. According to one report: “So-called ‘smart systems’ may determine the target, but the bombing is carried out with unguided and imprecise ‘dumb’ ammunition because the army doesn’t want to use expensive bombs on what one intelligence officer described as ‘garbage targets.’” Furthermore, 92 percent of housing units and 88 percent of school buildings in Gaza were destroyed, and 69 percent of overall structures across Gaza have been destroyed or damaged.

In 2024, UN experts deplored Israel’s use of AI to commit crimes against humanity in Gaza. Regardless of all the aforementioned information, that same year, Israel signed a global treaty on AI developed by the Council of Europe for safeguarding human rights. That Israel has killed such a large number of Palestinians using AI-powered tools connected to technologies used in daily life, such as WhatsApp, is seen by some as a warning sign of what could befall them one day, but by others as a blueprint for efficiently systematizing supremacy and control.

This piece posits that the issue isn’t just the lack of human oversight with data and AI tools; who collects, owns, controls, and interprets the data, and what their biases are (whether implicit or explicit), is a key part of understanding the actual and potential harm and abuse. Furthermore, focusing exclusively on technology in Israel’s committing of genocide in Gaza, or any war for that matter, risks a major mistake: absolving the perpetrators of responsibility for crimes they commit using technology. When the tools are over-emphasized, it becomes all too easy to redefine intentional abuses as machine-made mistakes.

When looking at technology’s use in geopolitics and warfare, understanding the power structures is key to gaining a clear overview. Finding the “goodness” in ultra-specific uses of technology does little in the attempt to offset the “bad.”

For the human beings whose lives have been made more challenging and whose conditions have become dire as a result of the implementation of technology in domination, warfare, and systems of supremacy, there is not much that can be rationalized for the better. The same can be said of other entities that use advantages (geopolitical, technological, or otherwise) to assert control over others in relatively more disadvantaged and vulnerable positions. To divorce the helpful from the harmful applications of technology is to lose sight of the bigger picture of not only how tech could be used one day, but how it is actually being used right now.

Normalizing surveillance in daily life https://globalvoices.org/2025/09/15/normalizing-surveillance-in-daily-life/ Mon, 15 Sep 2025 01:30:52 +0000 https://globalvoices.org/?p=843591 How technology is used to supercharge monitoring and control

Originally published on Global Voices

 

Grid in the sky above a silhouette of mountains. Image by Safa and Liz Carrigan, with visual elements from Yiorgos Bagakis, Alessandro Cripsta, and La Loma, used with permission.


This article was written by Safa for the series “Digitized Divides” and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

Surveillance, monitoring, and control have been used historically and continue to be used currently under the guise of protection and security, but, as professor Hannah Zeavin explained, “[C]are is a mode that accommodates and justifies surveillance as a practice, framing it as an ethical ‘good’ or security necessity instead of a political choice.”

Tactical Tech is based in Berlin, the former capital city of international espionage. The Ministry for State Security (also referred to as the Stasi) was the state security and secret police of the former East Germany (German Democratic Republic, or GDR) from 1950 until 1990. It is known as one of the most repressive police organizations ever to have existed. As the Stasi was being dissolved, thousands of protesters occupied its Berlin headquarters and prevented it from destroying its records. What survives includes nearly two million photos and so many files that, if they were laid out flat, they would stretch more than 111 kilometers (70 miles).

The Stasi also conducted international operations that had lasting effects abroad. They extensively trained the former Syrian Mukhabarat (secret police) of the now fallen Assad regime, under Hafez al-Assad: “[M]ethods of interrogation, infiltration, disinformation and brutal extraction of confessions were meticulously hammered into the minds of Syrian intelligence officials by senior Stasi agents.” With the fall of the GDR and the Berlin Wall, the Stasi was dissolved and East and West Germany reunified.

While Germany has taken some steps to reckon with its past, surveillance is still ever-present. German states have been using Palantir software to support population surveillance efforts since 2017. In 2021, Human Rights Watch raised concerns over two laws that were amended, which granted more surveillance powers to the federal police and intelligence services. While Germans have experienced a long and persistent history of surveillance and have gained a reputation for taking privacy issues very seriously, this perspective has changed over time. A 2017 study that surveyed over 5,000 Germans on various privacy-related topics found that “Germans consider privacy to be valuable, but at the same time, almost half of the population thinks that it is becoming less and less important in our society.” 

Although the Stasi are world-famous for their surveillance and data collection, today’s law enforcement landscape is a smorgasbord of data. The Stasi versus NSA visualization, developed in 2013, compares the data collected by the two entities, projecting that “the NSA can store almost 1 billion times more data than the Stasi.” Using modern technologies like algorithms and access to digitized data ranging from health conditions to search queries and private chats, it is easier than ever to get not just a glimpse but a full picture of the life of nearly anyone. 

As Amnesty International reported, “[T]he Stasi archive is a timely warning of the potential consequences of unchecked surveillance. It shows how quickly a system for identifying threats evolves into a desire to know everything about everyone.” Tactical Tech’s project “The Glass Room” has explored this topic over the years, noting: “There is a growing market for technologies that promise increased control, security, and protection from harm. At the same time, they can normalize surveillance at a macro and micro level — from the shape of a child’s ear to satellite images of acres of farmland. Often, those who need the most support may have the least control over how or when their data is being used.” 

The Glass Room’s “Big Mother” exhibit adapts the Big Brother imagery into a more nurturing figure — a mother — exemplifying how easily people let their guards down when data tracking is framed as helpful and caring. This can be seen in advertisements for tech products such as devices that help people monitor elderly relatives via an app, fertility tracking apps, and refugee and asylum-seeker biometrics registries. The US and Israel are among the world’s biggest suppliers of surveillance tech, including the US-based Palantir and Israel’s NSO Group and Elbit Systems, whose tools are used by governments at the US–Mexico border, in Central America, and in Europe.

Monitoring minors

The so-called ed-tech industry has been gaining traction for years, even before the COVID-19 pandemic. “Ed-tech” describes the many technological innovations marketed to schools as benefiting students, teachers, and school administrators. Not all ed-tech is the same, and some efforts to digitize schools aim to reduce the digital divide, which is felt most acutely in rural and low-income areas. That said, some of the digital tools used by school administrators can equally act as tools of surveillance. These include recording children at daycare, using AI to analyze body and eye movements during exams, and monitoring students’ social media.

So much monitoring is not without consequence, especially for traditionally marginalized groups. One study reported that student surveillance technologies put Black, Indigenous, Latine/x, LGBTQ+, undocumented, and low-income students, as well as students with disabilities, at higher risk. In 2023, the American Civil Liberties Union (ACLU) interviewed teens aged 14–18 to capture the experiences of surveillance in schools. One participant reflected: “…[W]e treat kids like monsters and like criminals, then … it’s kinda like a self-fulfilling prophecy.” In 2017, the Electronic Frontier Foundation warned: “Ed tech unchecked threatens to normalize the next generation to a digital world in which users hand over data without question in return for free services, a world that is less private not just by default, but by design.” Some students and parents have pushed back, and in some cases successfully blocked certain technologies from being used in schools.

Eyes everywhere

Workers are also feeling watched. From 2020 to 2022, the number of large employers who used employee monitoring tools doubled. And it isn’t only the well-known control mechanisms Amazon uses on their warehouse workers — the average office worker may also be affected. A 2023 study of 2,000 employers found that over three-quarters of them were using some form of remote work surveillance on their workers. Employers are keeping track of their employees using methods such as internet monitoring, fingerprint scanners, eye movement tracking, social media scraping, and voice analysis, among others. “We are in the midst of a shift in work and workplace relationships as significant as the Second Industrial Revolution of the late 19th and early 20th centuries,” according to the MIT Technology Review. “And new policies and protections may be necessary to correct the balance of power.”

Even cars can be turned into tools of surveillance. Getting to work and dropping the kids off at school may now take place in a data-mining automobile. In 2023, 84 percent of car brands were found to sell or share personal data with data brokers and other businesses. That same year, news broke that Tesla employees had been sharing private camera recordings captured in customers’ cars among themselves in chat rooms. This happened not once or twice but many times between 2019 and 2022. The videos included nudity, crashes, and road-rage incidents; some were even “made into memes by embellishing them with amusing captions or commentary, before posting them in private group chats.” In 2024, Volkswagen was responsible for a data breach that left the precise locations of hundreds of thousands of vehicles across Europe exposed online for months. In the US, researchers found that some license plate reader cameras were live-streaming video and car data online. 

In early 2025, Tesla executives handed over dashcam footage to Las Vegas police to help identify the person responsible for the Tesla Cybertruck explosion outside the Trump International Hotel (the attacker had used ChatGPT to plan the attack). While Tesla’s actions in this particular case were generally applauded in the media, the episode raises broader questions about surveillance, the application of the law, and the limits of privacy.

Researchers have noted about data tracking more broadly that “tactics and tools already used by law enforcement and immigration authorities could be adapted to track anyone seeking or even considering an abortion.” New ways of documenting and tracking people can also turn far more menacing under different political administrations, and in contexts with even fewer protections for marginalized groups. 

]]>
‘Smart’ (or Machiavellian?) surveillance: The power of terminology https://globalvoices.org/2025/09/03/smart-or-machiavellian-surveillance/ Wed, 03 Sep 2025 06:30:10 +0000 https://globalvoices.org/?p=842852 Technology is used to supercharge weapons of mass oppression

Originally published on Global Voices

Yellow background and outline of a face, signifying facial recognition technology. Image by Safa, with visual elements from La Loma, used with permission.

Image by Safa, with visual elements from La Loma, used with permission.

This article was written by Safa for the series “Digitized Divides,” originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement. 

The terms that are used to describe technology can shape how we think about it. The word “smart” has a positive connotation, in most cases, but when it comes to technology, “smart” is usually used interchangeably with “efficient.” Imagine if instead of calling systems of surveillance “smart,” we called them “Machiavellian” — how might that change our discourse, acceptance, and adoption of them? 

Unreliable systems

Tools of monitoring and control, such as CCTV, increasingly rely on facial recognition technology, which automatically identifies people from unique facial data, including measurements like the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of the cheekbones, and the length of the jawline. Facial recognition is used by governments, police, and other agencies around the world, with significant results, both beneficial and harmful. 

One unprecedented operation by US law enforcement resulted in hundreds of children and their abusers being identified in just three weeks. This technology has also been used to find missing and murdered Indigenous people (MMIP), helping 57 families find answers in just three years. While these results are indeed remarkable and reveal the ways in which the application of technologies can be used to help people, there have also been numerous cases of facial recognition being used by US law enforcement in ways that have harmed people. 

An app called CBP One, which asylum seekers at the US–Mexico border are required to use, obliges people to register themselves in a facial recognition system. But that system “[fails] to register many people with darker skin tones, effectively barring them from their right to request entry into the US.” The systems centralizing data of asylum seekers and migrants make longitudinal tracking of children possible. Facial recognition technologies are also used by ICE (the US’s Immigration and Customs Enforcement agency) to monitor and surveil people awaiting deportation hearings. 

In one study on facial recognition systems, MIT researcher Joy Buolamwini found that “darker-skinned females are the most misclassified group (with error rates of up to 34.7 percent). The maximum error rate for lighter-skinned males is 0.8 percent.” Harvard researcher Alex Najibi described how “Black Americans are more likely to be arrested and incarcerated for minor crimes than White Americans. Consequently, Black people are overrepresented in mugshot data, which face recognition uses to make predictions,” explaining how Black Americans are more likely than White Americans to become trapped in cycles and systems of racist policing and surveillance. 

This sentiment is echoed in a report by the project S.T.O.P. — The Surveillance Technology Oversight Project. The UK and China are also among the countries that practice “predictive policing.” One researcher focusing on China describes it as “a more refined tool for the selective suppression of already targeted groups by the police and does not substantially reduce crime or increase overall security.” So the issue here is not simply about flawed datasets; it is discrimination that already exists in society, where people who hold positions of power or have police or military force can use technology to enhance their oppression of particular groups of people. Larger datasets will not remedy or negate the problem of people acting upon discrimination, racism, or other types of bias and hatred. 

Algorithms are created by people (who inherently have their own biases) and are developed using our data; tools trained on our data can then be used to harm other people. Algorithms are also used by governments, law enforcement, and other agencies worldwide. Tools and services from Google, Amazon, and Microsoft have all been used by Israel in its war on Gaza. In the United States, algorithms have been used to score risk levels for individuals who have committed crimes, assessing their likelihood of committing future crimes. But researchers have found these algorithms to be “remarkably unreliable,” with significant bias in their design and implementation.

In Spain, an algorithm was used to predict the likelihood that a domestic abuse survivor would be abused again, with the intention of directing support and resources to the people who need them most urgently in an overburdened system. But the algorithm isn’t perfect, and over-reliance on flawed tools in high-stakes situations has had dire consequences. In some cases, survivors mislabelled as “low risk” were later murdered by their abusers, despite their efforts to seek help and report the abuse to the authorities.

In the Netherlands, tax authorities used an algorithm to help them identify child care benefits fraud; tens of thousands of lower-income families were wrongly penalized, many fell into poverty, and more than a thousand children were wrongfully placed into foster care. “Having dual nationality was marked as a big risk indicator, as was a low income [… and] having Turkish or Moroccan nationality was a particular focus.” 

Israel surveils and oppresses Palestinians

Israel’s surveillance industry is world famous. A 2023 report by Amnesty International mapped the visible Israeli surveillance system and found one or two CCTV cameras every five meters in Jerusalem’s Old City and Sheikh Jarrah in East Jerusalem. 

Since 2020, Israel’s military-run “Wolf Pack” has been in use; this is a vast and detailed database profiling virtually all Palestinians in the West Bank, including their photographs, family connections, education, and more. The Wolf Pack includes “Red Wolf,” “White Wolf,” and “Blue Wolf” tools: 

  • Red Wolf: The Red Wolf system is part of the Israeli government’s official CCTV facial recognition infrastructure to identify and profile Palestinians as they pass through checkpoints and move through cities. It has been reported that Israel’s military uses Red Wolf in the Palestinian city of Hebron. According to a project by B’Tselem and Breaking the Silence, the Israeli military has set up 86 checkpoints and barriers across 20 percent of Hebron, referred to as “H2,” that is under Israeli military control. The checkpoints are hard to avoid in H2. As Masha Gessen wrote, Palestinians living there “go through a checkpoint in order to buy groceries and again to bring them home.” According to UNRWA, 88 percent of children cross checkpoints on their way to and from school.
  • White Wolf: Another app, called White Wolf, is available to Israeli military personnel guarding illegal settlements in the West Bank and allows them to search the database of Palestinians. Since Israel’s war on Gaza began after the October 7, 2023, attacks by the Islamic Resistance Movement (aka Hamas) on Israelis, Israel has rolled out a similar facial recognition registry of Palestinians in Gaza. 
  • Blue Wolf: Using the app called Blue Wolf, the Israeli military has been carrying out a massive biometric registry of Palestinians, often at checkpoints and at gunpoint, sometimes at people’s private homes in the middle of the night. Israeli soldiers take pictures of Palestinians, including children, sometimes by force. Israeli soldiers also note within the app any “negative impressions [they] have of a Palestinian’s conduct when encountering them.” One source added, “It’s not that the military has said, let’s make the Blue Wolf so [the Palestinians] can pass through more easily. The military wants to enter the people into its system for control.”

A 2025 article also revealed how the Israeli military was using a large language model (such as that used by tools like ChatGPT) to surveil Palestinians. One Israeli intelligence source stated, “I have more tools to know what every person in the West Bank is doing. When you hold so much data, you can direct it toward any purpose you choose.” While the Israeli military is not the only government-sanctioned example of training AI tools on civilian data, it offers an important insight into how the latest technologies can be adopted for widespread monitoring and control.

As researcher Carlos Delclós said, “Privacy is not merely invaded; it is obliterated, as human lives are fragmented into datasets optimised for corporate gain,” and the same message can be extended to political gain. Regardless of whether we call technology by positive or negative terms, at the end of the day, the technology itself cannot be separated from the operators (i.e. humans) who deploy it. If the people who use these technologies are also inhabiting societies and working within systems that have documented concerns of discrimination and/or control, it seems quite possible that the tech will be used to cause harm. We don’t even need to imagine it. We can simply look around with both eyes open.

]]>
Podcast: Safa, from the Palestinian diaspora, on how revealing her identity can make her feel unsafe https://globalvoices.org/2025/05/16/podcast-safa-from-the-palestinian-diaspora-on-how-revealing-her-identity-can-make-her-feel-unsafe/ Fri, 16 May 2025 04:47:05 +0000 https://globalvoices.org/?p=834365 “Where Are You Really From?”: A podcast that explores identities

Originally published on Global Voices

The words where are you really from are in white text on a black background. The center is overlaid by an illustration of the globe

Image made by Ameya Nagarajan for Global Voices on Canva Pro.

“Where Are You REALLY From?” is a new podcast series from Global Voices that emerged from a panel at the December 2024 Global Voices summit in Nepal, where members of the Global Voices community shared their experiences of dealing with other people's perceptions about their diverse and complex origin stories. In each episode, we invite our guests to reflect on the assumptions that lie behind the question, “But where are you really from?” and how they respond.

The podcast is hosted by Akwe Amosu, who works in the human rights sector after an earlier career in journalism and is also a coach and a poet. She is a co-chair of the Global Voices board.

The transcript of this episode has been edited for clarity.

Akwe Amosu (AA): Hello and welcome to Where Are You Really From? A podcast that explores identities. I'm Akwe Amosu and today I'm speaking to Safa. Safa, why do people ask you that question?

Safa: Thank you so much for having me. People ask me that question because I am Palestinian, but I am diaspora Palestinian. I was born and raised not in Palestine, but in the United States, and now I live in Europe. And so people are often asking me that because they can tell that I have a different background I guess.

AA: And what's your reaction to being asked that question? What does it make you feel?

Safa: It depends on the situation, but oftentimes I feel insecure, because typically they start with, “Oh, where are you from?” and I give them an answer, and then when they ask, “No, but where are you really from?” that means they were not satisfied with my answer. It could be that the reason I gave my initial answer, maybe I said, oh, I'm from around here or I'm from wherever, is because I'm kind of assessing the situation. I'm assessing like, okay, am I in a grocery checkout line and I just want to get out of here? Is there like a creepy person who's talking to me and I just want to end the conversation? Or is it someone fun at a party? And I actually do want to tell them, or I want to test the waters with something small. So when they ask that question, it feels very invasive.

AA: Is there something about being asked to reveal who you are that feels dangerous?

Safa: Yes. Yes, especially as a Palestinian. My entire life, it has been a precarious identity to share because I have always lived in spaces where people attribute certain ideologies or terms to me that are completely inaccurate and are actually quite discriminatory. For example, when I've told people before, “Oh, I'm Palestinian,” they have said to me, “Oh, so you're a terrorist,” or, “Oh, so you're anti-Semitic,” which I am not. Absolutely not. I’m not either of those things and neither are most of the Palestinians I've ever known in my whole life. So it feels quite dangerous because when people have those ideas about you, it's not only offensive, but they act in a different way when they think that you are threatening. They act in a threatening and sometimes potentially violent way. So it is very scary.

A dark haired woman in sunglasses is smiling at the camera. She's wearing a yellow top with a black scarf. There are palm trees, rocks, and buildings in the background.

Safa. Used with permission.

AA: So how do you usually answer when you get asked this question?

Safa: So if I feel safe, like let's say I'm at a party and someone's wearing a keffiyeh, or they've got like watermelon earrings on or something, I'm like, okay, I can tell them I'm Palestinian. You know, both of my parents are Palestinian. My mom was even born there, and I am really proud of that. I'm very proud of it. But if I'm not sure of the situation, or maybe I've identified that they said something that already felt a little bit, I'm not really sure about this. I might test the waters. So what I might say is “I'm Arab,” which is true. I might say “I am Jordanian,” of which technically I'm a passport holder. So technically that is correct on a nationality level, not on an identity level.

But if I feel very, very unsafe, where I feel like, okay, they might be just straight up racist, or it's not a good situation, or I just don't want to engage, I might just say, “Oh, I was born in the US,” or name the particular town in the US. And sometimes that satisfies people.

I also, very rarely, I've taken a sort of page out of my mother's book, which is just making something up completely. If I feel really unsafe or really annoyed that day, I might just say, “Oh, I'm French” or, “Oh, I'm Greek,” or just whatever, just to get them away from me.

AA: You're really revealing the complexity of having to answer these questions! Do you think that the people who ask you these questions know how complex it is and is there a way for them to ask the question that would be okay for you?

Safa: So I actually think usually people don't realize it's such a tricky question or it's such a loaded question. I've actually had conversations with people in the US and in Europe to kind of explain to them why this question feels so invasive or so sensitive. And their reaction has always been surprise, because, for them, some of these people just haven't had to think about it for more than a second, what safety means or what feeling like being under a microscope might feel like or being part of a group that's particularly hated against. And so I do think that the intentionality is oftentimes just like from a place of curiosity. But, you know, I do think that people sometimes take their curiosity too far because, at the end of the day, it is none of their business.

I think what I would like to see, and I've experienced some nice examples where I'll tell someone where I'm from, whatever answer I decide to answer, and I can tell that they look dissatisfied with my answer, but they don't inquire further, right? And I think in those moments, they recognize that I am not sure if I feel safe or comfortable in that setting, for whatever reason, let's say it's in front of a big group. And then I'll have people come to me one on one after to be like, so where did you say you were from? And maybe I'll give a different answer, maybe the same one.

I've also had people who just kind of focus on other attributes of my identity that are maybe more relatable to them. So if they ask where I'm from and they're not satisfied with my answer, they might ask me, “Oh, what do you do for your job?” And I like that. Or “Yeah, did you notice the weather? What do you like to do when it's sunny?” These questions that then get to the heart of who I am in a different way. Because of course my identity is an important part of the puzzle, but there are also other interesting things about me. And that's how I try to approach it with other people too.

I've also had people be a bit sneaky, which I think is fun and fine. And if they're not satisfied with my answer, they'll be like, “Oh, do you also speak other languages?” And that kind of gives it away too, to some extent, which I'm okay with, because then they're still inquiring in more of a curious and compassionate way, which I really appreciate.

AA: Is there anything else you want to say?

Safa: I do have to acknowledge that the moment we're having this conversation is a really difficult moment for Palestinians and actually for Arabs everywhere. We're really hurting right now. And I think in the same way that people… sometimes don't know how to broach the topic of where are you from. They also don't know how to broach the topic of talking to someone whose people are experiencing a genocide or also other types of tragedies and stress. And maybe just to share with those people to say, it's okay to be sloppy, and it's okay as long as it comes from a place of compassion and you are communicating that awkward feeling. Maybe instead of not saying anything, you might say, oh, wow, oh, you're Palestinian. I wish I had the right words to say right now, or I wish I knew how to talk about what's happening right now. I think there are ways to be so compassionate, you know, that open the door for more understanding, for more exchange.

AA: Thank you, Safa.

Safa: Thank you so much.

Listen to other episodes here: Where Are You REALLY From?

]]>
Namesakes in Gaza: Carrying the martyrs with us in diaspora https://globalvoices.org/2024/12/31/namesakes-in-gaza-carrying-the-martyrs-with-us-in-diaspora/ Tue, 31 Dec 2024 04:30:13 +0000 https://globalvoices.org/?p=826293 Since 2023, I’ve carried Safas in Gaza everywhere with me, guiding my big decisions and feeling the weight of responsibility to live 

Originally published on Global Voices

‘Unwritten memories: It should have been me/it could have been me.’ Hand-stitched Palestinian tatreez and photo collage on canvas, 2024, made by the author. Photo by the author, used with permission.

When the list of people killed was released from the Gaza Ministry of Health, I couldn't help but search for my name. How many of me did I find? The first time I checked, in November 2023, there were 19 of me killed — later, that number grew to over 50 known killed Safas, ranging from ages 1 to 82.

How are we connected? And what will they teach me without us meeting? 

A Palestinian duality

I’ve often felt pulled in two directions — a duality I feel as a Palestinian in the diaspora: yearning and anguish. On the one hand, a yearning to have had childhood memories in Palestine — why not me? On the other hand, an anguish at the thought of what could have been had our family fled to Gaza and it were me there right now — what if it had been me?

Why not me?

I often imagine what my life could have been like if I had had childhood memories in Palestine … especially in my fantasy of a free Palestine. Maybe we would have visited the Mediterranean Sea for our family trips.

What if it had been me?

I also often think about how my reality of exile could have been a different fate. Had any of my grandparents made a different decision in 1948 and gone south, I could have been in Gaza right now. 

As the daughter and granddaughter of Palestinian refugees, I've often found myself thinking about what my life could have been had they fled south during the Nakba.

We Palestinians in the diaspora have been dealing with survivor’s guilt, especially during the genocide in Gaza. Diana Safieh wrote: “The constant flow of distressing news from Palestine — images of destruction, stories of loss and accounts of human rights abuses — is overwhelming. And I feel guilty mentioning sleeplessness because it’s nothing compared to what people back home are experiencing all the time.” 

Our shared namesake 

It brings me comfort to believe that Safas have a shared connection. While it may be a mere coincidence that we share the same name, I grew up reflecting on the meanings of my name, often. 

There are different ways to translate Safa (spelled in Arabic: صفاء). Depending on who you talk to, it may be translated as “clarity,” “serenity,” or “tranquility.” It's also one of the most famous holy sites for Muslims on the pilgrimage: Al-Safa and Al-Marwa. Safa shows up in many places, such as the Umm Safa village in Palestine. In southern Syria, there is a volcanic mountain bearing our name.

I've often wondered growing up if that clarity, tranquility, and serenity described me or if it was aspirational — something I'd never quite achieve but always try to. Oftentimes, I felt very chaotic, unbalanced, and confused. 

Were the Safas in Gaza, those who reached my age or older, also confused like I was or did some of them have it all figured out? I especially yearn to speak to 82-year-old Safa. She was just barely older than my mom is now. What were her greatest achievements and most precious memories? What advice would she have given to us other Safas if she were still alive?

Name (AR) Name (EN) Gender Age Birthdate
صفا سليمان سلمان النجار Safa Suleiman Salman al-Najar female 1 2022-04-25
صفا بلال محمد الرملاوى Safaa Bilal Mohammed Al-Ramlawi female 2 2021-10-12
صفاء عيسى ياسين السراج Safaa Issa Yassin Al-Sarraj male 5 2019-01-01
صفاء خالد جهاد ابوجباره Safaa Khaled Jehad Abu-jebara female 5 2018-10-11
صفا مثقال علي ابوسيف Safa Muthqal Ali Abosaif female 8 2015-11-03
صفا اسعد علي عروق Safaa Asaad Ali Arouq female 9 2014-07-09
صفا علاء عمر النمر Safaa Alaa Omar Al-Nimr female 12 2011-05-18
صفاء عدنان عبدالكريم ابومصطفى Safaa Adnan Abd Al-Karim Abu-Mustafa female 14 2009-05-10
صفاء شريف محمد الدلو Safa Sharif Mohammed Al-Dalu female 14 2009-02-13
صفا ياسر عايدي وافي Safa Yasser Aidi Wafi female 16 2008-05-19
صفا ايمن عبدالكريم عماره Safaa Ayman Abd Al-Karim Amara female 18 2005-03-03
صفاء خليل عبدالحافظ البغدادي Safaa Khalil Abd Al-Hafiz Al-Baghdadi female 18 2005-04-17
صفاء محمد كامل جنديه Safaa Mohammed Kamel Jundiyeh female 19 2004-08-19
صفاء محمد يوسف شحيبر Safaa Mohammed Yusuf Shahibir female 19 2004-03-05
صفاء جهاد موسى خليفة Safaa Jihad Mousa Khalifa female 19 2005-03-18
صفا رافت جاسر الكحلوت Safa Raafat Jaser Al-Kahlout male 23 2000-07-23
صفاء عمر حامد البطنيجي Safaa Omar Hamed Al-Batniji female 24 1999-11-05
صفاء منذر عبدالحميد زينو Safaa Munthr Abdalihamaid Zeino female 25 1998-01-06
صفاء جهاد التلباني Safaa Jehad Altlbanei female 25 1999-01-01
صفاء حسن محمد عماره Safaa Hassan Muhammad Ammarah female 25 1998-08-14
صفاء اكرم محمد ابوعيش Safaa Akram Muhammad Abu-aish female 26 1997-06-20
صفاء نزار جميل حسونة Safaa Nizar Jameel Hassouna female 26 1997-01-20
صفاء محمود محمد التترى Safaa Mahmoud Muhammad Alttra female 27 1995-10-28
صفاء يوسف فراج فراج Safaa Youssef Faraj Faraj female 27 1996-07-12
صفاء جميل محمود موسى Safaa Jameel Mahmoud Moussa female 28 1995-03-10
صفاء حسن خليل ابوسيف Safaa Hassan Khalil Abosaif female 30 1993-09-10
صفا محمود محمد الشوربجي Safa Mahmoud Muhammad Alshoarabji female 30 1993-05-15
صفاء صابر محمود الزريعي Safaa Sabr Mahmoud Alzriai female 30 1993-01-11
صفاء فؤاد عبدالكريم كرم Safa Fawad Abd Al-Karim Karam female 30 1993-05-12
صفاء سهيل مراد الغندور Safaa Suhail Marad Alghnadoar female 31 1991-12-14
صفاء طلال محمد البياع Safaa Talal Muhammad Albiaa female 32 1991-06-10
صفاء عبدالرحيم خليل أبوشقرة Safaa Abdalrahaiam Khalil Aboshqurah female 33 1990-05-11
صفاء كمال محمد علي ابوكميل Safaa Kamal Muhammad Ali Abukamil female 34 1989-05-15
صفاء جمال احمد مشتهى Safaa Jamal Ahmed Moshtaha female 34 1989-03-15
صفاء جودت مجدي منصور Safaa Jodt Mjadi Munasoar female 36 1988-04-22
صفاء حرب سالم صباح Safaa Harab Salem Sabah female 37 1986-09-12
صفاء عبدالسميع يونس الكفارنه Safaa Abdul-sami Yunus al-Kafarna female 37 1986-11-28
صفا منصور عبد صبح Safa Munasoar Abd Sbh female 38 1985-08-27
صفاء عادل عياده العجله Safaa Adel Aiadah al-Ajlah female 38 1986-01-04
صفاء ابراهيم محمد جراده Safaa Ibrahem Muhammad Jradah female 39 1984-03-19
صفاء علي محمد ابووردة Safaa Ali Muhammad Abu-warda female 39 1984-02-26
صفاء هاني ابراهيم المدهون Safaa Hani Ibrahem al-Madhoun female 39 1983-12-09
صفا صهيب حسام الفرا Safa Suhib Husam al-Farra female 40 1984-07-04
صفاء محمد أحمد السراج Safaa Mohammed Ahmed Al-Sarraj female 41 1982-07-28
صفاء الدين محمد سلمان التلباني Safaa Addeen Muhammad Salman Altlbanei male 41 1982-05-24
صفاء نزهات صالح جحا Safaa Nazuhat Saleh Jha female 44 1979-10-29
صفاء سالم صبح ابوقايدة Safaa Salem Sbh Aboqaidah female 45 1979-03-12
صفاء عبدالرزاق خليل عياش Safaa Abdalrzaq Khalil Aiish female 46 1977-02-27
صفاء عبدالجواد محمد ابوراس Safaa Abdul-jawad Muhammad Aburas female 48 1975-07-26
صفاء احمد خليل اسماعيل Safaa Ahmed Khalil Ismail female 49 1974-08-19
صفا عبد الرؤوف عايش اللحام Safa Abd Arraouf Ayesh Allham female 49 1974-06-20
صفاء صبحى سلمان سويدان Safaa Sbha Salman Suwaidan female 51 1972-03-15
صفا حسن محمد ابوسخيل Safa Hassan Muhammad Aboskhil female 59 1964-04-12
صفاء مصطفي حسن الدن Safaa Mustfi Hassan Aldn female 70 1953-04-24
صفا محمد عبدالله درغام Safa Muhammad Abdullah Drgham female 73 1950-01-01
صفاء واكد وهدان أبو عقلين Safaa Wakd Whadan Abu Aqlain female 82 1941-08-10

Maya Angelou famously said, “I come as one, but I stand as 10,000.” I used to imagine that meant I carried with me everywhere my late grandparents, aunts, uncles, and ancestors, as well as loved ones who were still alive but not near me. But since 2023, I have carried the Safas of Gaza everywhere with me; they guide my big decisions, and I feel the weight of the responsibility to live.

]]>
Israel is using surveillance technology to subjugate and target Palestinians https://globalvoices.org/2024/10/11/israel-is-using-surveillance-technologically-to-subjugate-and-target-palestinians/ Fri, 11 Oct 2024 05:06:42 +0000 https://globalvoices.org/?p=822001 AI supercharges human rights violations against civilians

Originally published on Global Voices

Hebron’s Al-Shuhada Street checkpoint with at least 8 CCTV cameras and the Smart Shooter in view. Photo by AV, used with permission. 2024.

Data collection and technology can have harmful applications, especially when used to monitor and subjugate marginalized people. This can be seen most clearly in how Israel has used technology in its war against Palestinians. Israel is using data collected from Palestinians to train AI-powered automated tools that have been deployed against Gaza and across the West Bank.

Israeli AI-supercharged surveillance tools and spyware, including Pegasus, a malware program, and AI weaponry, including the Smart Shooter and Lavender, have received both condemnation and interest. Graduates of the Israeli military’s elite intelligence unit, Unit 8200, are so coveted by surveillance and military tech companies that the term “8200-to-tech pipeline” was coined.

In 2024, UN experts deplored Israel’s use of AI to commit crimes against humanity in Gaza. Despite its ongoing use of AI in human rights violations, Israel that same year signed a global treaty on AI developed by the Council of Europe to safeguard human rights.

A ‘gamified’ surveillance system

Israel’s CCTV system, called “Mabat 2000”, was first installed throughout Jerusalem in the year 2000 but has seen significant upgrades in more recent years. A 2023 report by Amnesty International mapped the visible Israeli surveillance system and found one or two cameras every five meters in Jerusalem’s Old City and Sheikh Jarrah. One Palestinian resident said, “Every time I see a camera, I feel anxious. Like you are always being treated as if you are a target.” Israeli CCTV cameras are also mounted on checkpoints and barriers and clustered on buildings and towers across the occupied West Bank.

A Palestinian home in Sheikh Jarrah, with an Israeli settlement on top. There are approximately eight CCTV cameras visible in the photo. Clusters of surveillance cameras can be identified on the pole to the right and surrounding the Star of David and Israeli flags at the top. Photo by the author, used with permission. 2024.

Since 2020, Israel has rolled out the Wolf Pack suite of surveillance tech across the Occupied Palestinian Territories. Using an app called Blue Wolf, Israel carried out a massive biometric registration of Palestinians, often at checkpoints and at gunpoint, sometimes at people's private homes in the middle of the night.

The gendered aspect of surveillance was noted in a 2021 report by 7amleh, with one female interviewee explaining that she would sleep in her hijab, feeling that she could not experience privacy inside her home.

Israeli soldiers took pictures of Palestinians, including children, for cataloguing, and the process was gamified, giving “a weekly score based on the most amount [sic] of pictures taken. Military units that captured the most faces of Palestinians on a weekly basis would be provided rewards such as paid time away.” This tool was playfully referred to as a “Facebook for Palestinians” by Israeli soldiers.

Red Wolf is part of the CCTV facial recognition infrastructure used to identify Palestinians as they pass through checkpoints and move through cities. Another app, White Wolf, is available to Israelis illegally settling in the West Bank, allowing them to search the database of Palestinians. Somehow, the increased monitoring and surveillance have failed to capture the crimes committed by Israeli settlers against Palestinians. Since October 2023, Israel has rolled out a similar facial recognition registry of Palestinians in Gaza.

A cluster of CCTV cameras on Via Dolorosa Street in Jerusalem’s Old City. Photo taken by the author, used with permission. 2024.

AI-supercharged weapons

In 2021, Google and Amazon jointly signed an exclusive billion-dollar contract with the Israeli government to develop ‘Project Nimbus’, which is meant to advance technologies in facial detection, automated image categorization, object tracking, and sentiment analysis for military use — a move that was condemned by hundreds of Google and Amazon employees in a coalition called No Tech for Apartheid.

While many Big Tech companies have contracts with Israeli military and intelligence agencies, Project Nimbus has faced especially harsh criticism because of the chorus of alarms about Israel’s human rights violations raised prior to October 2023 by the United Nations, Human Rights Watch, Save the Children, and Amnesty International.

Israeli intelligence units have been relying ever more heavily on AI tools to “rank civilians and civilian infrastructure according to their likelihood of being affiliated with militant organizations” within Gaza, speeding up the ranking process from a full year when completed by a person, to half a day by an AI tool.

The AI-powered systems ‘Lavender’ and ‘The Gospel’ (‘Habsora’) have been described as a “mass assassination factory” in Gaza with minimal human oversight, where the “emphasis is on quantity and not on quality”. Another AI-powered tool called “Where's Daddy” tracks selected Palestinians so that they can be bombed when they enter their homes — also killing their families and neighbours; thousands of adults and children who were not involved in fighting have been murdered. The system identifies targets based on various criteria, one of which is whether the person is in a WhatsApp group with another suspected individual.

Social scoring technologies such as these have been banned by the European Union.

Drone terror

Drones have been used by Israel against Palestinians for more than a decade, sometimes for surveillance and other times for strikes that have led to traumatic amputations — although drone use was considered a “well-known secret” in Israeli society for years. As early as 2009, Human Rights Watch reported on Israel’s use of armed drones in Gaza.

In 2021, Israel started deploying “drone swarms” in Gaza to locate and monitor targets. In 2022, Omri Dor, commander of Palmachim Airbase, said, “The whole of Gaza is ‘covered’ with UAVs that collect intelligence 24 hours a day.”

Since October 2023, Israel’s killing of Palestinians has increased dramatically, causing Gaza to be called a “graveyard for children” and “a living hell”. Technology, including drones, has played a major role in increasing the damage and the number of targets. Hybrid drones such as “The Rooster” and “Robodogs” can fly, hover, roll, and climb uneven terrain. Machine gun rovers have been used to replace on-the-ground troops. There have been allegations that Israeli sniper drones have played recordings of crying infants to lure targets into the open in Gaza.

Drones have been connected to psychological distress for Palestinians because of the 24/7 buzzing sounds and fears of being targeted.

Israel intensifying AI-powered attacks

As early as October 13, 2023, experts were calling Israeli attacks on Gaza a “potential genocide” and a “textbook case of genocide”. Judges of the International Court of Justice said in January 2024 that “at least some of the acts and omissions alleged […] to have been committed by Israelis in Gaza appear to be capable of falling within the provisions of the (Genocide) Convention.”

In July 2024, the World Court found Israel responsible for apartheid. The United Nations Office for the Coordination of Humanitarian Affairs has reported over 41,000 Palestinians killed, over 96,000 Palestinians injured, and nearly half a million Palestinians facing catastrophic levels of food insecurity in Gaza.

In September 2024, Israel was suspected of being responsible for the exploding-pagers attack in Lebanon, which killed at least 37 people and injured approximately 3,000. In just three days, 90,000 Lebanese were displaced, fleeing Israeli attacks. Within one week, over 1,000 Lebanese were killed in the attacks. Israel continues to increase airstrikes on Lebanon, as well as Syria and Yemen.

]]>
The denial of Palestinian childhood https://globalvoices.org/2024/04/24/the-denial-of-palestinian-childhood/ Wed, 24 Apr 2024 18:53:26 +0000 https://globalvoices.org/?p=811204 ‘Unchilding from birth’ reveals Israel’s systemic oppression of Palestinians

Originally published on Global Voices

A playground in the West Bank. Picture taken by Justin McIntosh, August 2004. Wikimedia Commons. (CC-BY-2.0).

Since Israel's latest aggression on Gaza began in October — described as “a mass assassination factory” — the dehumanization of Palestinians has intensified. UNICEF has labeled Gaza “a graveyard for children” and “a living hell” as a result of Israel's severe and unrelenting attacks.

UN Special Rapporteur Francesca Albanese referred to the ‘deliberate unchilding from birth’ of Palestinians under Israel’s “forever occupation”, which has caused “never-ending harm” to the population. However, Israeli violence against Palestinian children is not a recent phenomenon.

‘Unchilding’ Palestinians for generations 

At least 14,500 Palestinian children have been killed by Israel since October 7. However, Israel’s abuses against Palestinian children before this war had already been thoroughly documented. Journalist Chris Hedges detailed violence by Israelis against Palestinian children in Gaza in his 2002 book War Is a Force That Gives Us Meaning:

Children have been shot in other conflicts I have covered […] but I have never before watched soldiers entice children like mice into a trap and murder them for sport. […] ‘We all threw rocks,’ said ten-year-old Ahmed Moharb. ‘Over the loudspeaker the soldier told us to come to the fence to get chocolate and money. Then they cursed us. Then they fired a grenade. We started to run. They shot Ali in the back. I won’t go again. I am afraid.’

Palestinian scholar Nadera Shalhoub-Kevorkian — whose work focuses on trauma, state crimes and criminology, surveillance, gender violence, law and society and genocide studies — first coined the term “unchilding” in 2019, to critically examine the use of Palestinian children as leverage for political goals.  

Middle East Monitor reported that from 2000–2020, “3,000 children have been killed by Israeli occupation forces. Some were killed in front of the lenses of international media, including 11 year-old Muhammad Al-Durrah.” In 2021, Defence for Children International also highlighted Israel’s targeting of Palestinian children and Human Rights Watch noted a spike in Palestinian children killed by Israelis in the West Bank in August 2023.

Save the Children reported in 2020, 2022, and mid-2023 on Israel’s systematic punitive abuses and in-custody traumatization of Palestinian children, including strip searching. They stated that “the most common charge brought against children is stone throwing, for which the maximum sentence is 20 years.” 

Defense for Children International found that the majority of children prosecuted from 2013 to 2018 experienced abuse by Israelis while in custody. Ahmad Manasra became well known for spending his entire teenage years in prison, including two years in solitary confinement, leading to severe psychological deterioration. According to The Guardian, Israel’s mass incarceration of Palestinian children represents “a hidden universe of suffering that touched nearly every Palestinian home.”

Sign from a peaceful pro-Palestinian demonstration in Berlin on December 2, 2023. Photo by the author, used with permission.

News media’s role in furthering the denial of Palestinian childhood

Two articles by The Guardian’s Jason Burke, published on November 22 and 23, illustrate the denial of Palestinian childhood portrayed across news media. Burke noted in both articles, “the [Israeli] hostages to be freed are women and children, and the Palestinian prisoners are also women and people aged 18 and younger.”

The use of divergent language within the same article to refer to children parallels the “die” versus “kill” hierarchy, which is used to downplay Palestinian versus Israeli fatalities in news media.

The Guardian articles followed an intense period marked by derogatory racist comments, including Israeli Prime Minister Benjamin Netanyahu’s remarks in October, where he called Palestinians the “children of darkness” and “human animals.”

The Guardian is not the only news agency to employ divergent, vague or otherwise imprecise language when referencing Palestinian children and babies. The Associated Press has referred to Palestinian children as “minors,” Sky News has described a 4-year-old as a “young lady,” and The Washington Post has used the term “fragile lives” instead of saying “premature babies.” Scanning the archived New York Times top headlines daily from November 22 to December 3 reveals barely a hint of Palestinian victims, certainly not reflecting the mass number of child fatalities that occurred during that period.

After publication, The Guardian amended both of the aforementioned articles to refer to Palestinians under 18 as “children.” In a note at the bottom of the articles to explain the change, they wrote, “Any insensitivity in the earlier expression was unintentional.” 

Queer Jewish influencer Matt Bernstein (mattxiv) stated on Instagram: “When we allow ourselves to view Palestinians as anything less than full human beings […] we become complicit in our own moral bankruptcy.”

The language used in news reporting is crucial to communicating key details to readers. A 2016 Columbia University study found that 59 percent of shared links “went unclicked, and presumably unread,” underscoring the significance of news headlines in delivering information and influencing audiences. The words used in social media previews — such as the title and tagline — are critical for those who don’t read past the headlines to grasp the extent of the situation.

Sign from a peaceful pro-Palestinian demonstration in Berlin on November 4, 2023. Quote is from Save the Children. Photo by the author. Used with permission.

Racialized children at high risk

The denial of childhood is not exclusive to Palestinians, and valuable insights can be gained by examining other racialized groups also subjected to significant violence.

In the United States, Black children are six times more likely than white children to be shot and killed by police. High-profile cases like the murders of 17-year-old Trayvon Martin, 16-year-old Ma'Khia Bryant, and 12-year-old Tamir Rice illustrate the excessive risk Black children face in their daily lives.

Researcher Alisha Nguyen explains:

To justify dehumanizing treatment against Black children, White logic affirms that Black children are less innocent and therefore, should receive less protection and do not deserve the same level of tolerance compared to White children.

Rice was later described by Cleveland Police Patrolmen's Association president Steve Loomis as “a 12-year-old in an adult body” as a means of justifying the excessive force used by the police officer who assassinated the sixth-grader.

Similar to the comments made by Loomis, there have been attempts to justify the murder of Palestinian children. Former Israeli Defense Minister Avigdor Lieberman stated in radio interviews and on X on November 30, “There are no innocents in Gaza.” President Isaac Herzog shared the same sentiment.


As activist and educator Wagatwe Wajuki said on X:

If you wonder why Black people identify with the fight for Palestinian liberation: the white media’s refusal to see our children as children resonates. […] Under white supremacy, childhood is racialized because they associate childhood with innocence and only white children are deemed innocent.

Israeli journalist Gideon Levy wrote in Haaretz of the children killed by Israel:

No explanation, no justification or excuse could ever cover up this horror. It would be best if Israel's propaganda machine didn't even try to. […] Horror of this scope has no explanation other than the existence of an army and government lacking any boundaries set by law or morality.

]]>
‘I don’t feel safe': Reactions to Germany’s suppression of pro-Palestine solidarity https://globalvoices.org/2024/02/16/i-dont-feel-safe-reactions-to-germanys-suppression-of-pro-palestine-solidarity/ Fri, 16 Feb 2024 17:48:19 +0000 https://globalvoices.org/?p=806959 Arabs in Berlin say they feel scared, isolated, and confused by recent crackdowns

Originally published on Global Voices

 

‘Free Palestine will not be cancelled.’ Demonstration in Berlin, November 4, 2023, organized by Palestinian and Jewish groups. Photo by Streets of Berlin on Flickr.com (CC BY-SA 2.0)

In Berlin, Arabs are reaching out to one another, extending private sanctuaries to grieve over Gaza. While the stifling of pro-Palestinian sentiments isn’t new, recent crackdowns in Germany have struck a deeper chord. Israel's latest war on Gaza has killed at least 28,663 Palestinians and wounded 67,984 others so far. 

Since October 7, Germany has intensified its crackdown on Palestinian identity and solidarity, stifling peaceful expression. Protests have been banned or cancelled, and authorities target individuals displaying Palestinian symbols, like the flag and keffiyeh. Furthermore, legitimate resistance phrases like “Free Palestine” and “From the river to the sea” have been criminalized.

Global Voices interviewed six Arabs in Berlin who expressed concerns about the crackdown on dissent and the erasure of Palestinian identity in Germany. For safety, their identities are undisclosed, and pseudonyms are used.

An overwhelming sense of fragility

Fouad, a 30-year-old graphic designer and Iraqi refugee living in Germany for a decade, expressed disappointment with Berlin's lack of commitment to peace and freedom, stating:

“Though the crackdown isn’t new, this is the first time I’ve truly felt like a refugee in Berlin, which I used to see as a sanctuary. Germany is once again failing to stand on the side that calls for peace and freedom. It pains me that my tax money aids oppression. There is a negative energy everywhere. I feel watched, and I'm on high alert — I don't feel safe.”

Fouad is not alone. Hanan, a 25-year-old Egyptian who moved to Germany as a teenager, is dismayed that her eight years in Germany have been marked by insincerity: “I felt safer before. I’m not afraid to confront racist comments or stares. It's just that I used to feel safer in the company of genuine people who shared the same values and principles in life. Now, everything feels fake.”

Omniya, a 39-year-old Egyptian-Polish woman born and raised in Germany, echoes their sentiments, feeling an ever-present risk of displacement. She expresses a similar longing for safety: “When I was a kid, one of my parents was deported. Now, politicians are discussing mass deportations. The right-wing party, AfD, is gaining power in parliament and garnering support, aiming to transform Germany into an ethno-state.

If AfD succeeds in implementing its agenda — along with other parties leaning in that direction — it could have severe consequences. I wouldn’t say I felt ‘safe’ before, but I felt ‘safer.’ Now, I feel very unsafe.”

Souad, a 36-year-old US-born Palestinian educator in Germany for over 10 years, shares feelings of confusion amidst the unpredictable regulations in the country. She pointed out:

“Germany changes the rules by the day, yet they are not officially documented anywhere. It is incredibly confusing. One day, saying “Free Palestine” is acceptable; the next, it's ‘verboten’ (forbidden). Another day, discussing “genocide” is fine; then suddenly, it's verboten. We will find ways to express ourselves, but we need clarity on what’s on their list. Why play mind games with us when our people are being slaughtered?”

Rania, a 40-year-old Palestinian-German humanitarian born and raised in Germany, questions the state of the political landscape in the country:

“It’s not fear that overwhelms me when I read the crazy headlines (labeling us as the new Islamists and advocating for deporting us), but sheer anger. The same anger I feel when the government I voted for abstains from the UN ceasefire referendum

My true concern: Is there still a political party that individuals who identify as Black, Indigenous, and People of Colour (BIPOC) can trust?”

Marwa, a 35-year-old Jordanian engineer on a three-month fellowship to Berlin, felt unsettled throughout her stay.

Sign from a peaceful pro-Palestinian demonstration in Berlin on October 20, 2023. Photo by the author, used with permission.

“I felt unsafe in Berlin since I arrived at the airport. I was the only passenger stopped by the police for inspection because of my hijab and Arab Muslim looks. 

My insecurity heightened since October 7, coinciding with the increased hate speech against Arabs and Muslims. Now, I feel compelled to justify why I am Muslim, why I am Arab, why I support Palestine.”

 

Echoes of post-9/11 Islamophobia and anti-Arab racism in Germany

Following Germany’s response to October 7, the interviewees drew parallels to their past negative experiences after 9/11. Their reflections underscore the enduring effects of discrimination, spanning different countries and contexts. They noted a reemergence of fear, grief, and Islamophobia in their lives. Souad pointed out: 

“At 13, living in the US during 9/11, I endured one of the toughest periods, as Arabs and Muslims were dehumanized. Hearing white people celebrate the deaths of Arab and Muslim civilians left lasting scars. It took years to overcome the pain, but these recent months in Germany have reopened that wound.”

Omniya echoed similar disbelief and distress, saying:

“My current experience with the crackdown feels retraumatizing, given that I was coming-of-age during 9/11. Back then, I couldn't believe how George Bush spoke about Arabs and Muslims and how everything led to the attack on Iraq. Now, witnessing the genocide in Gaza, I recognize parallels to the post-9/11 climate, evoking the same feelings.”

In contrast, Fouad shared his foreboding feelings about the future, explaining: 

“The moment I saw the news [on October 7], I immediately thought of the consequences and felt grief for what's to come. I contemplated how the West would interpret it. It brought back memories of 9/11 and the surge of Islamophobia that ensued.”

Friends’ reactions deeply hurt

The interviewees conveyed a profound isolation stemming from a lack of empathy from German society and feelings of betrayal in personal relationships and societal dynamics.

Hanan felt hurt after a former friend belittled her grief about Gaza, which she had shared on social media: 

“This is the hardest one so far. I think about the good moments we shared as friends, yet I now feel very distanced from everyone. The feeling of betrayal impacts my sense of safety more than the ‘unsafe’ and violent racist comments we were used to before October 7.” 

Omniya also feels isolated. She clarified:

“I feel extremely lonely all the time. German society not only fails to understand, but doesn't want to understand. Even close friends have no idea of the depth of my sadness. They don’t reach out, and if they do, it’s for other reasons. 

They are clueless about the impact on Arabs here. This isn't confined to my friends; I see it everywhere. As Arabs, we share a collective feeling, but white people seem oblivious to it. Currently, I feel I cannot trust anyone.”

Rania explains the careful attention she gives to details when discussing Palestine:

“I shared statistics on civilian casualties in Palestine from the last two decades. As I clicked ‘share,’ my heart pounded, and I closed my eyes. I'm aware that most of my followers don’t want to read this content. However, I know what I’m talking about. I know my math. When I mention that the media doesn't report on ‘our people,’ I’m addressing a specific issue.”

She said the Palestinians symbolize more than just the struggle for Palestine. “They represent the disregard for Arab lives that don’t matter, the BIPOC lives that don’t matter, and they are victims of colonialism.”

Four months since October 7, the situation for Palestinians in Gaza and the West Bank continues to deteriorate. Despite the International Court of Justice’s order for Israel to prevent acts of genocide, attacks on displaced Palestinians, mostly children, persist. Oxfam has declared “a humanitarian crisis of unprecedented severity and scale” in Gaza, underscoring the gravity of the situation.

Germany’s export of millions of euros worth of weapons to Israel contradicts its commitment to human rights. The recent affirmation of support for Israel by the German Federal Government ignores the plight of Arabs and Muslims within Germany. This inconsistency cannot go unchallenged, as lives continue to be disregarded in Gaza and beyond.

]]>
When Palestinians ‘die’ and Israelis get ‘killed’ in the same war https://globalvoices.org/2023/11/16/when-palestinians-die-and-israelis-get-killed-in-the-same-war/ Thu, 16 Nov 2023 15:43:53 +0000 https://globalvoices.org/?p=801244 Word choices in the Israel-Gaza conflict reflect systemic media bias

Originally published on Global Voices

Federal Building, San Francisco, October 20, 2023. Hundreds of people from many backgrounds came together outside Senator Nancy Pelosi's office to paint a giant street mural. The message: BIDEN, PELOSI: DON'T AID AND ABET WAR CRIMES, and calling for a CEASEFIRE! With Jewish Voice for Peace Bay Area, The Peace Poets, Climate Justice Street Mural Arts Project. Photo via Flickr. (CC BY-NC 2.0 DEED).

Amidst the chaos of Israel's war on Gaza, truth becomes a casualty in the battleground of information, entangled in a maze of misinformation and biased narratives, eclipsing the reality of the crisis unfolding in Gaza.

In news reporting, every semantic choice, nuanced omission, prioritization, and bias holds the power to shape how readers interpret and absorb information. Systemic issues and marginalized voices are obscured beyond headlines. Cognitive and algorithmic biases manipulate information access, notably in the “fog of war,” as seen in Gaza.

The complex information landscape is shaped not only by misinformation but also by different narratives employing defamation and dehumanization, mirroring patterns in mainstream media coverage of Palestinians and other Black, Indigenous, and People of Color.

Palestinians don’t just die, they get killed

The choice between “died” and “killed” in describing fatalities in Israel’s war on Gaza reflects a subtle yet impactful semantic difference because it shapes how information is perceived.

Merriam-Webster defines “to die” as an intransitive verb, implying an indirect action, potentially linking a fatality to natural causes, like old age. Conversely, “to kill” is a transitive verb, suggesting a more direct action, often tied to an unnatural or violent manner of death, such as an airstrike.

In 2022, Laura Albast wrote in an opinion article in The Washington Post, “This is a pattern we have seen over and over again in media coverage of Palestine. Palestinians are not killed; we simply die.”

This sentiment was echoed recently by journalist Yara Eid when she responded to a Sky News presenter, “I think language is really important to use because, as a journalist, you have the moral responsibility to report on what is happening. Palestinians don’t just die, they get killed.”

Journalist Yara Eid explains the importance of the use of language when talking about the Israeli war on Gaza. Screenshot from eid_yara Instagram video. Fair use.

News media actively make choices about using passive or active voice, demonstrating a hierarchy in terminology beyond the die/kill dichotomy. Examining language within a news piece exposes framing that reveals inherent bias or perspective.

In one particularly confusing example, a CNN news anchor ambiguously described Palestinian fatalities by saying: “One hospital in Gaza says it received 22 bodies during the intense overnight bombardment along with hundreds of people injured.” There was no further clarification provided about whether those bodies were deceased, who was responsible, and from whom they were received.

The New York Times headline on the November 5 Israeli airstrike hitting the Al Maghazi refugee camp used indirect language, stating, “Explosion Gazans Say Was Airstrike Leaves Many Casualties in Dense Neighborhood.” This phrasing, such as “leaves many casualties” and “dense neighborhood” instead of specifying “a refugee camp,” was ambiguous.

Furthermore, the language used casts doubt about information sources, stating “Gazans say,” without explicitly attributing the airstrikes to Israelis. In the context of Israel’s month-long bombardment on Gaza, such ambiguity seems unnecessary. Notably, this strike was one of three airstrikes hitting refugee camps in Gaza within a 26-hour window.

In a CBS News article, the authors used intense language to describe Hamas’ attack on Israelis as a “murderous rampage.” However, when referring to Palestinian fatalities over the first nine days of the war, they employed comparatively lighter terms like “killed” and “death toll.”

This created a notable hierarchy in the portrayal of violence, which may diminish the impact or severity of the suffering of Palestinians. This discrepancy in language can influence readers’ impressions and create an imbalance in how violence is perceived.

Revealing a systemic issue in newsrooms

Bristol Friends of Gaza protest on the front lawn of BBC Bristol's headquarters on Whiteladies Road about biased reporting of the 2014 Israel-Gaza war. Photo by Rwendland, July 23, 2014. Wikimedia Commons (CC BY-SA 4.0).

Revealing a systemic issue in newsrooms, this hierarchy of terms and narrative shaping is not unique to Palestinians. The U.S. news media has long faced criticism for racism, particularly in its coverage of police violence against Black Americans, exemplified in the murder of Breonna Taylor.

Author and editor Adeshina Emmanuel pointed out, “Newsrooms often fixate on the moment of death, leaning heavily on police narratives, and — as those narratives often do — assassinate the characters of police violence victims.” This implies a narrow focus on the immediate and often dramatic events rather than the broader context.

The media’s coverage of the war in Ukraine has also raised concerns about racism. Scholar H.A. Hellyer highlighted the racist language used by reporters, emphasizing the dehumanization of non-White populations and its impact on their right to live in dignity. Beyond overtly racist coverage, other major humanitarian catastrophes, such as the war in Sudan, receive minimal attention from mainstream media.

Political influence and pressure on newsrooms significantly shape media narratives. It is unsurprising, then, that in May 2023 a majority of US journalists expressed concern about press freedoms. These concerns are borne out by instances in which journalists were dismissed for expressing pro-Palestinian views, a trend that has intensified in recent weeks.

Amid the ongoing Israeli war on Gaza, U.S. Secretary of State Antony Blinken requested the Qatari Prime Minister "to tone down Al Jazeera's rhetoric" regarding Israel's actions in Gaza. This sentiment was reflected in other newsrooms, as reported by The Intercept: "Leadership at Upday, a subsidiary of the Germany-based publishing giant Axel Springer, gave instructions to prioritize the Israeli perspective and minimize Palestinian civilian deaths in coverage, according to the employees."

A group of Jewish writers drafted an open letter condemning the notion that criticism of Israel is inherently antisemitic and noting the suppression of pro-Palestinian speech:

“Now, this insidious gagging of free speech is being used to justify Israel’s ongoing military bombardment of Gaza and to silence criticism from the international community. […] Israeli journalists fear consequences for criticizing their government. […] We refuse the false choice between Jewish safety and Palestinian freedom; between Jewish identity and ending the oppression of Palestinians. In fact, we believe the rights of Jews and Palestinians go hand-in-hand.”

Global calls of solidarity

People in their tens of thousands rally in Melbourne, Australia, in support of Palestine and in solidarity with the Palestinian people. October 15, 2023. Photo by Matt Hrkac, Wikimedia Commons (CC BY-SA 3.0).

Despite the biased coverage by mainstream news media, the public has become aware of the genocide faced by civilians in Gaza, largely due to on-the-ground journalists providing coverage in English on social media platforms. Journalists such as Motaz Azaiza, Plestia Alaqad, and Bisan Owda, to name a few, have played a significant role in disseminating information.

Since Israel's war on Gaza began, hundreds of thousands of protestors across major cities, including London, Washington DC, São Paulo, Cape Town, and Kuala Lumpur, have regularly voiced solidarity with Palestinians. They have stepped in to address the failure of mainstream news media to raise awareness about Israel's war crimes and disproportionate attacks on Palestinians.

These demonstrations align with a growing rift between the Global South and the West, exemplified by a chorus of accusations of hypocrisy directed at the West from the Global South. The criticism underscores contrasting policy and media responses: the West's condemnation of an illegal occupation in Ukraine alongside its staunch support for Israel's occupation of the West Bank and Gaza Strip.

As awareness of media biases grows, people around the world are driven to scrutinize information, demanding a more equitable representation of diverse perspectives. This collective effort signifies a pivotal shift in which an informed public actively challenges biases, fostering a space where truth prevails and marginalized voices resonate.

]]>
Deconstructing the ‘beheaded babies’ misinformation in Israel’s war on Gaza https://globalvoices.org/2023/10/27/deconstructing-the-beheaded-babies-misinformation/ Fri, 27 Oct 2023 14:27:04 +0000 https://globalvoices.org/?p=799689 How the consequences of misinformation contribute to real-life violence against Palestinians

Originally published on Global Voices

 

Smoke and flames billow after Israeli forces struck a high-rise tower in Gaza City, October 2023. Photo by Ali Hamad via Palestinian News & Information Agency (Wafa) in contract with APAimages. Wikimedia Commons. CC BY-SA 3.0.

In an era where news media wields significant influence in democratic societies, the alarming consequences of misinformation are evident, particularly in the context of recent events in Gaza. This misinformation profoundly impacts global public opinion and action, igniting extreme harm and violence against the people of Gaza.

In 2021, a Pew Research Center report noted an increase in perceived media influence and trust compared to the previous year. This is further underscored by a 2015 paper in the American Journal of Political Science, which examined the intricate link between news and politics, highlighting the media's role in shaping political decision-making and its impact on the democratic process.

When trusted media outlets, relied upon by billions for verified, fact-checked information, perpetuate unsubstantiated claims, two scenarios emerge. In one, readers continue to internalize false information even after corrections. Alternatively, eroding public trust in the news may drive individuals to seek alternative sources, fostering confusion and dubious claims.

This scenario is unfolding in the media coverage of Gaza following the Hamas attack on Israel on October 7, which resulted in the killing of 1,400 Israelis, the injuring of over 5,400, and the taking of 220 hostages. Subsequently, Israel declared war, flattening parts of Gaza, killing over 7,000 Palestinians, and injuring over 16,000 others. The escalation of violence also affected Palestinians in the West Bank and Israel, leading to an additional 1,100 Palestinian fatalities.

During times of conflict and unrest, a rapid influx of information from various sources can become difficult to assess and verify. Additionally, disinformation may increase, posing unique threats to civilians and marginalized groups. In the evolving news about Palestine and Israel, many falsehoods have inundated information platforms, including claims about Iran's black flag, Turkey's intervention, Ukraine's weaponry support, the Israeli general, Egyptian parachute jumpers, and Putin's warning, among others.

The case of “decapitated babies” 

One of the most egregious cases of misinformation since October 7 concerns the claim of beheaded babies. The claim began with Nicole Zedek, a correspondent for the privately owned Israeli channel i24News, who reported "that Israeli soldiers told her they'd found babies, their heads cut off." However, Israeli forces spokesperson Doron Spielman could not confirm the report, as per NBC News.

While 28 Israeli children have been confirmed as casualties, there is no photographic or otherwise verified evidence supporting the allegation of decapitated babies.

The Daily Mail subsequently published an article that further perpetuated the rumor, but failed to exercise due diligence in terms of fact-checking or seeking verifiable sources. The claim was presented in the headline and reiterated throughout the piece, lacking supporting evidence or eyewitness statements.

Between October 10 and October 11, the claim went viral across numerous major media outlets and news agencies worldwide, including BBC, CNN, CBS, Sky News, Metro News, Fox News, and Business Insider. Notably, it remained unsubstantiated, unverified, and unchecked during this time. 

Screenshot of CNN homepage on 11 October at 07:27, accessed via the Internet Archive. Fair use.

On October 11, during a White House press conference, President Biden perpetuated the claim, implying he had seen evidence, although he hadn’t. He stated: “I mean, I — I’ve been doing this a long time. I never really thought that I would see and have confirmed pictures of terrorists beheading children. I never thought I’d ever — anyway.”

Following the press conference, CNN issued a clarification, stating “After President Biden’s remarks earlier today, an administration official told CNN neither Biden nor the administration have seen pictures or confirmed reports of children or infants beheaded by Hamas.” 

The National Endowment for Democracy reported in 2020 that information evoking emotions like fear, disgust, awe, and anger tends to go viral, even when entirely fabricated. Those spreading the unsubstantiated claim of 40 beheaded babies may have garnered attention, clicks, likes, and shares, but they also collectively disregarded journalistic due diligence by sharing heinous claims without verification.

Sign at a peaceful pro-Palestinian protest in Berlin. October 21, 2023. Photo by Wael Eskandar. Used with permission.

The harm is done

The consequences of spreading such misinformation can be dangerous. As per The Intercept, "reports of Hamas crimes against civilians fueled rage among the public, elected officials, and policymakers."

Marwa Fatafta, a regional policy manager for the nonprofit digital rights group Access Now, told The Washington Post that:

There’s a lot of information being shared that is not verified, a lot of calls to violence and dehumanization. And all this is fanning the flames for further massacres [of Palestinians].

One such case occurred in Illinois, where a 6-year-old Palestinian American boy, Wadea Al-Fayoume, was fatally stabbed by his family's 75-year-old landlord, who also left the boy's mother critically injured. The assailant shouted, "You Muslims have to die! You are killing our kids in Israel. You Palestinians don't deserve to live."

The Lemkin Institute for Genocide Prevention pointed to the use of genocidal language in this case and held the Biden administration and US media accountable for “fanning the flames of anti-Palestinian and anti-Muslim sentiment in the USA during the first week of this horrific crisis in Israel/Palestine.”

This single incident exemplifies the real-life consequences of disseminating misinformation like the beheaded babies story. It has fueled Islamophobia, dehumanized Palestinians, and exploited anti-Muslim tropes, particularly against Muslim men, highlighting the dangerous conflation of Muslims with terrorists.

When assessing the potential consequences of mis/disinformation using the Digital Enquirer Kit, a tool for investigators, it's practical to gauge the scale of harm from “low” to “high.” In this context, “low” pertains to misleading information that may be confusing, whereas “high” harm signifies “long-term, severe, or irreversible physical damage or psychological distress.” The latter is unfolding in Gaza due to misinformation.

Moreover, when false information is released into the public sphere, whether online or through media, the damage is often irreversible. Even if swift retractions or corrections are issued, they frequently garner less attention, and there is uncertainty about whether people will remember or internalize the accurate information. 

Basic cognitive psychology and the process of memory recall also play a role in this, as highlighted by First Draft News: 

There is consensus that once you’ve been exposed to misinformation it is very, very difficult to dislodge from your brain. Corrections often fail because the misinformation, even when explained in the context of a debunk, can later be recalled as a fact.

The situation in Gaza

According to statistics by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA) on October 26, over just two weeks, 7,028 Palestinians were killed and 18,482 others were injured in Gaza. A staggering 68 percent of Palestinian casualties are children and women. Compounding this tragedy, more than 1,550 people in Gaza, of which 56 percent are children, are missing and feared to be trapped under the rubble. 

OCHA’s report also reveals that 45 percent of Gaza's housing units have been destroyed or damaged, with 1.4 million internally displaced people, highlighting the severity of the humanitarian crisis. Critical shortages of fuel, water, shelter, sanitation, and food add to the gravity of the dire situation. 

Alongside the devastation in Gaza, the West Bank has witnessed casualties, injuries, imprisonment, and the destruction of Palestinian property in the past two weeks, as reported by OCHA. Amnesty International documented Israeli war crimes, including the obliteration of entire families in Gaza. Moreover, Human Rights Watch confirmed Israel's use of white phosphorus in Gaza and Lebanon, a clear violation of international law. UN chief Guterres condemned the collective punishment.

In an interview with media critic Sana Saeed, conducted by The Intercept on October 11, addressing issues of misinformation and violence, she remarked: 

It’s been about four days since this incredible and tragic escalation of violence and the level of misinformation — even disinformation — seems near unprecedented […] We have seen journalists, in particular, spread unverified information that is being used to justify Israeli and even American calls and actions to annihilate an entire population.

 

]]>