Do you follow?: How technology can exacerbate ‘information disorder’ https://globalvoices.org/2025/11/10/do-you-follow-how-technology-can-exacerbate-information-disorder/ Mon, 10 Nov 2025 09:30:34 +0000 https://globalvoices.org/?p=846070 ‘It is very, very difficult to dislodge [misinformation] from your brain’

Originally published on Global Voices

Two pink birds with strings of emails beneath them. Image by Liz Carrigan and Safa, with visual elements from Alessandro Cripsta, used with permission.

This article was written by Safa for the series “Digitized Divides” and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

Social media has been a key tool of information and connection for people who are part of traditionally marginalized communities. Young people access important communities they may not be able to access in real life, such as LGBTQ+ friendly spaces. In the words of one teen, “Throughout my entire life, I have been bullied relentlessly. However, when I’m online, I find that it is easier to make friends… […] Without it, I wouldn’t be here today.” But experts are saying that social media has been “both the best thing […] and it’s also the worst” to happen to the trans community, with hate speech and verbal abuse resulting in tragic real-life consequences. “Research to date suggests that social media experiences may be a double-edged sword for LGBTQ+ youth that can protect against or increase mental health and substance use risk.” 

In January 2025, Mark Zuckerberg announced that Meta (including Facebook and Instagram) would end their third-party fact-checking program in favor of the model of “community notes” on X (formerly Twitter). Meta’s decision included ending policies that protect LGBTQ+ users. Misinformation is an ongoing issue across social media platforms, reinforced and boosted by the design of the apps, with the most clicks and likes getting the most rewards, whether they be rewards of attention or money. Research found that “the 15% most habitual Facebook users were responsible for 37% of the false headlines shared in the study, suggesting that a relatively small number of people can have an outsized impact on the information ecosystem.”

Meta’s decision to end its third-party fact-checking program has raised alarm bells among journalists, human rights organizations, and researchers. The UN’s High Commissioner for Human Rights, Volker Türk, said in response: “Allowing hate speech and harmful content online has real world consequences.” Meta has been implicated in or accused of supercharging the genocide of the Rohingya in Myanmar, as well as fueling ethnic violence in Kenya, Ethiopia, and Nigeria, at least in part due to the rampant misinformation on its platform.

“We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook … are affecting societies around the world,” said one leaked internal Facebook report from 2019. “We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.” The International Fact-Checking Network responded to the end of the nine-year fact-checking program in an open letter shortly after Zuckerberg’s 2025 announcement, stating that “the decision to end Meta’s third-party fact-checking program is a step backward for those who want to see an internet that prioritizes accurate and trustworthy information.”

Unverifiable posts, disordered feeds

The algorithms behind social media platforms control which information is prioritized, repeated, and recommended to people in their feeds and search results. Yet despite numerous reports, studies, and shifts in user behavior, the companies themselves have done little to adapt their interface designs to modern patterns of interaction or to facilitate meaningful user fact-checking.
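To make this concrete, here is a minimal, hypothetical sketch (in Python, with invented posts and weights) of the kind of engagement-based ranking described above: content is ordered by predicted interaction, not by accuracy, so whatever draws the most clicks, likes, and shares rises to the top.

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# Posts, weights, and numbers are invented for illustration; real platforms use
# far more signals, but the core idea -- rank by predicted engagement, not by
# accuracy -- is the same.

posts = [
    {"id": "careful_correction", "likes": 40,  "shares": 5,   "comments": 12},
    {"id": "outrage_rumor",      "likes": 900, "shares": 450, "comments": 700},
    {"id": "cat_photo",          "likes": 300, "shares": 60,  "comments": 80},
]

WEIGHTS = {"likes": 1.0, "shares": 3.0, "comments": 2.0}  # shares spread content furthest

def engagement_score(post: dict) -> float:
    """Score a post purely by how much interaction it attracts."""
    return sum(WEIGHTS[signal] * post[signal] for signal in WEIGHTS)

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["id"], engagement_score(post))
# The outrage-driven rumor far outranks the careful correction, which mirrors
# why corrections rarely get as much attention as the original falsehood.
```

Nothing in such a score rewards being true, which is the design problem the rest of this section describes.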

Even when media outlets publish corrections to false information and to unsubstantiated claims they have perpetuated, it is rarely enough to reverse the damage. As described by First Draft News: “It is very, very difficult to dislodge [misinformation] from your brain.” When false information is published online or in the news and begins circulating, even if it is removed within minutes or hours, the “damage is done,” so to speak. Corrections and clarifying statements rarely get as much attention as the original piece of false information, and even if they are seen, they may not be internalized.

Algorithms are so prevalent that, at first glance, they may seem trivial, but they are actually deeply significant. Well-known cases like the father who found out his daughter was pregnant through what was essentially an algorithm, and another father whose Facebook Year in Review “celebrated” the death of his daughter, illustrate how the creators, developers, and designers of algorithmically curated content should be considerate of worst-case scenarios. Edge cases, although rare, are significant and warrant inspection and mitigation. 

Pushing audiences further down the rabbit hole, recommendation algorithms across social media can radicalize viewers through the content they prioritize and serve, as a multitude of reports and studies have found. “Moral outrage, specifically, is probably the most powerful form of content online.” A 2021 study found that TikTok’s algorithm led viewers from transphobic videos to violent far-right content, including racist, misogynistic, and ableist messaging. “Our research suggests that transphobia can be a gateway prejudice, leading to further far-right radicalization.” YouTube was also once dubbed the “radicalization engine,” and still seems to be struggling with its recommendation algorithms, as in the more recent report of YouTube Kids sending young viewers down eating disorder rabbit holes. Ahead of German elections in 2025, researchers found that social media feeds across platforms, but especially on TikTok, skewed right-wing.

An erosion of credibility

People are increasingly looking for their information in different ways, beyond traditional news media outlets. A 2019 report found that teens were getting most of their news from social media. A 2022 article explained how many teens are using TikTok more than Google to find information. That same year, a study explored how adults under 30 trust information from social media almost as much as national news outlets. A 2023 multi-country report found that fewer than half (40 percent) of total respondents “trust news most of the time.” Researchers warned the trajectory of information disorder could result in governments steadily taking more control of information, adding “access to highly concentrated tech stacks will become an even more critical component of soft power for major powers to cement their influence.” 

Indonesia’s 2024 elections saw the use of AI-generated digital avatars take center stage, especially in capturing the attention of young voters. Former candidate and now President Prabowo Subianto used a cute digital avatar created by generative AI across social media platforms, including TikTok, and was able to completely rebrand his public image and win the presidency, distracting from accusations of major human rights abuses against him. Generative AI, including chatbots like ChatGPT, is also a key player in information disorder because the texts and images it produces are so realistic and convincing.

Even seemingly harmless content on spam pages like “Shrimp Jesus” can result in real-world consequences, such as the erosion of trust, falling for scams, and having one’s data breached by brokers who feed that information back into systems, fueling digital influence. Furthermore, the outputs of generative AI may be highly controlled. “Automated systems have enabled governments to conduct more precise and subtle forms of online censorship,” according to a 2023 Freedom House report. “Purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern.”

As has been echoed time and again throughout this series, technology is neither good nor bad — it depends on the purpose for which it is used. “Technology inherits the politics of its authors, but almost all technology can be harnessed in ways that transcend these frameworks.” These various use cases and comparisons can be useful when discussing specific tools and methods, but only at a superficial level — for instance, regarding the digital avatars mentioned in this piece.

One key example comes from Venezuela, where the media landscape is rife with AI-generated pro-government messages and people working in journalism face threats of imprisonment. In response, journalists have used digital avatars to help protect their identities and maintain privacy. This is, indeed, a story of resilience, but it sits within a larger and more nefarious context of power and punishment. While any individual tool can reveal both benefits and drawbacks in its use cases, zooming out reveals power systems and structures that put people at risk, and shows that the trade-offs of technology are simply not symmetrical.

Two truths can exist at the same time: technology is used to build strength, and it is used to harm and oppress people. That fact is significant.

Behind our screens: The truth about ‘artisanal’ mining and ‘natural’ technology https://globalvoices.org/2025/10/05/behind-our-screens-the-truth-about-artisanal-mining-and-natural-technology/ Sun, 05 Oct 2025 12:30:32 +0000 https://globalvoices.org/?p=844378 Revealing the true costs of cobalt extraction

Originally published on Global Voices

Image by Liz Carrigan and Safa, with visual elements from Yiorgos Bagakis, Alessandro Cripsta, and La Loma, used with permission.

This article was written by Safa for the series ‘Digitized Divides’ and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

When people talk about “natural” versus “artificial,” there is an assumption that technology sits on the artificial side, but the elements and materials it is made from come from the earth and are handled by many people. 

What really is “natural,” after all? “It is impossible to talk about a green energy transitioning world without these minerals,” said humanist, leader and speaker Kave Bulambo in a 2024 speech. “When you start to dig deep to try and understand this equation, you realize that under this shiny Big Tech movement lies a world of exploitation for men, women, and even children laboring in cobalt mines in the [Democratic Republic of] Congo.”

It would be disingenuous to attempt to disentangle the human rights abuses connected to creating technologies from their environmental impacts. Siddharth Kara, a researcher of modern-day slavery, discussed the environmental impacts of cobalt mining: “Millions of trees have been cut down, the air around mines is hazy with dust and grit, and the water has been contaminated with toxic effluents from the mining processing.”

Cobalt and ‘green’ energy

Cobalt is a metal whose compounds have an almost eerie blue color — for centuries, it has been used in the arts. It has also become essential for the manufacturing of rechargeable batteries — like those that power smartphones, laptops, electric cars, and more. Cobalt is just one of the natural resources behind the “green energy revolution.” But this important mineral can be toxic to touch and breathe, especially in high doses.

Large deposits of cobalt have been found in the DRC, accounting for over 70 percent of the world’s reserves. To understand the harmful effects of cobalt mining in the DRC, it is essential to consider its colonial history. Exploitation of the country’s resources persisted even after it gained formal independence in 1960, leaving a legacy that continues to shape the country’s mining sector today. Kolwezi, a city in the DRC, was built by Belgium under an apartheid-style system of urban segregation, and now has many large open-pit mines in and around it.

Both multinational companies with concessions and artisanal miners are involved in cobalt mining in the DRC, though industrial mines now dominate the region. Artisanal and small-scale mining (ASM) remains widespread, with thousands of informal miners working in dangerous conditions to extract cobalt by hand. Kara described how so-called “artisanal miners” — including children — are digging for cobalt: “The bottom of the supply chain, where almost all the world's cobalt is coming from, is a horror show.” 

What comes to mind when you think of something “artisanal”? It is probably not informal workers digging in hazardous, often toxic conditions, either earning a subsistence income for their families or working in small groups to extract minerals for commercial sale. “Artisanal” carries a meaning of small-scale and handmade, which is true in a sense for the work of “artisanal miners.” But the term is evocative of a quaint neighborhood farmers’ market or traditional handmade cheese or soap — not of children and adults digging toxic stones from the ground with their bare hands at gunpoint.

The term partly comes from its low-tech nature, as it involves individuals mining deposits that are either unprofitable, unsafe, or otherwise unsuitable for large-scale mining companies. Yet, artisanal mining is far from small-scale. Over 100 million people worldwide are engaged in or rely on the income it generates. While it may seem more wholesome than industrial mining, an industry with one of the worst track records for human rights abuses, artisanal mining often lacks environmental and worker safeguards, as well as protections for women’s and children’s rights. 

This form of mining is common in Kolwezi, especially in areas where people have been displaced by large-scale mining projects. Despite attempts to formalize the sector, informal mining persists, with reports of “creuseurs” (“diggers,” as they are known locally) continuing to dig under their homes or in new “illegal” sites outside the formal mine boundaries. As one miner, Edmond Kalenga, put it: “The minerals are like a snake moving through the village. You just follow the snake.”

‘Blood cobalt’

A 2022 Amnesty International report detailed several case studies of human rights abuses at three sites, using documentary evidence, satellite images, and interviews with former residents to determine that people had been forcibly evicted from their homes in the name of energy transition mining. Forced evictions constitute a fundamental breach of human rights, leading to loss of livelihood and of other rights, such as access to basic services, including health and education. The forced evictions occurred as part of the government's efforts to formalize the mining sector, carried out in collaboration with mining companies. People living close to polluted mines are exposed to severe health risks: the DRC mining region is one of the 10 most polluted areas in the world, research suggests a correlation between exposure to heavy metals such as cobalt and birth defects, and children have been found with high concentrations of cobalt in their urine.

In addition to the human rights violations already mentioned, the environmental and health costs are innumerable and interconnected, spanning biodiversity loss, pollution of air, soil, and water, and the socio-economic consequences of job insecurity, violence, and loss of livelihoods. These impacts in turn lead to further challenges, including displacement, gender-based violence, and the erosion of cultural knowledge. Diamonds are not the only conflict mineral; cobalt is among the many minerals extracted through degrading means, with devastating results.

Companies that make lithium batteries, such as Tesla, occasionally respond to public calls for supply chain transparency; however, as demand for cobalt grows, businesses involved in battery manufacturing must pay attention to ethical and human rights issues along the entire supply chain. Alphabet (Google’s parent company), Apple, Dell, Microsoft, and Tesla have all been accused of purchasing cobalt gathered by means of forced labor and of deliberately obscuring their dependence on child labor — including the labor of children living in extreme poverty.

While a US court found that companies purchasing from suppliers were not responsible for those suppliers’ practices, further allegations have since been raised against Apple. “It is a major paradox of the digital era that some of the world’s richest, most innovative companies are able to market incredibly sophisticated devices without being required to show where they source raw materials for their components,” said Emmanuel Umpula, executive director of Afrewatch (Africa Resources Watch).

The European Parliament has passed a law obliging large companies to conduct human rights and environmental due diligence — a step towards holding corporations accountable for rights violations by their suppliers. But supply chains themselves are not necessarily reliable narrators. In the case of cobalt, refineries may mix cobalt mined with child labor together with child-labor-free cobalt, making it difficult or even impossible to trace. Furthermore, cobalt that is free of child labor is not necessarily free of human exploitation and harsh conditions. For more in-depth information on due diligence and accountability in the DRC’s mining sector, the Carter Center highlights several key recommendations.

Our energy consumption will only continue to increase with developments like ChatGPT, cryptocurrencies, and faster internet. One researcher found that using generative AI to create one image uses as much energy as charging a smartphone. A report by Goldman Sachs, a multinational investment firm, found that one AI-powered search used 10 times more electricity than a regular search. Both Google and Microsoft have self-reported that their carbon emissions have grown as a result of AI. With water and food scarcity being real-world threats and an ever-warming climate, how long will the planet be able to sustain these systems? When we finally take a critical look at the nature that’s powering our screens, we may see its poisonous impacts on people and the planet.
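As a rough back-of-the-envelope illustration of that “10 times” comparison (the per-query figures below are assumptions for illustration, not numbers from the report), the gap adds up quickly at scale:

```python
# Back-of-the-envelope sketch of the "10 times more electricity" comparison.
# The per-query figures are assumptions for illustration only.
conventional_wh = 0.3            # assumed energy per conventional search, in watt-hours
ai_wh = 10 * conventional_wh     # the roughly 10x ratio cited above

daily_queries = 1_000_000_000    # hypothetical: one billion queries in a day
extra_kwh_per_day = daily_queries * (ai_wh - conventional_wh) / 1000
print(f"Extra electricity if all queries were AI-powered: {extra_kwh_per_day:,.0f} kWh/day")
# About 2.7 million kWh/day extra under these assumptions -- on the order of the
# daily electricity use of tens of thousands of households.
```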

Systematized supremacy: The consequences of blind faith in technology https://globalvoices.org/2025/09/22/systematized-supremacy-the-consequences-of-blind-faith-in-technology/ Mon, 22 Sep 2025 06:30:30 +0000 https://globalvoices.org/?p=843846 Technology itself isn’t good or bad; it is about the humans behind it

Originally published on Global Voices

Illustration of two digital faces on either side of a bank of technologies with people looking at it. Image by Liz Carrigan and Safa, with visual elements from Yiorgos Bagakis and La Loma, used with permission.

This article was written by Safa for the series ‘Digitized Divides’ and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

Technology can be used to help people or to harm people, and it isn’t necessarily an either/or situation — it can be used simultaneously for the benefit of one person or group while harming another person or group.

While some may ask whether the benefits of using personal data to implement widespread policies and actions outweigh the harms, comparing the benefits and harms in this balanced, binary, two-sided approach is a misguided way to assess it critically, especially when the harms include violence against civilians. After all, human suffering is never justified, and there are no ways to sugarcoat negative repercussions in good faith. Technological bothsidesism attempts to determine the “goodness” or “brownie points” of technology, which is a distraction, because technology itself isn’t good or bad — it is about the humans behind it, the owners and operators behind the machines. Depending on the intentions and aims of those people, technology can be used for a wide variety of purposes.  

Lucrative and lethal

Israel uses data collected from Palestinians to train AI-powered automated tools, including those co-produced by international firms, like the collaboration between Israel’s Elbit Systems and India’s Adani Defence and Aerospace, that have been deployed in Gaza and across the West Bank. Israeli AI-supercharged surveillance tools and spyware, including Pegasus, Paragon, QuaDream, Candiru, Cellebrite, as well as AI weaponry, including the Smart Shooter and Lavender, are world-famous and exported to many places, including South Sudan and the United States.

The US is also looking into ways to use domestically made and imported facial recognition technologies at the US–Mexico border to track the identities of migrant children, collecting data it can use over time. Eileen Guo of MIT Technology Review wrote: “That this technology would target people who are offered fewer privacy protections than would be afforded to US citizens is just part of the wider trend of using people from the developing world, whether they are migrants coming to the border or civilians in war zones, to help improve new technologies.” In addition to facial recognition, the United States is also collecting DNA samples from immigrants for a mass registry with the FBI.

In 2021, US-headquartered companies Google and Amazon jointly signed an exclusive billion-dollar contract with the Israeli government to develop “Project Nimbus,” which was meant to advance technologies in facial detection, automated image categorization, object tracking, and sentiment analysis for military use — a move that was condemned by hundreds of Google and Amazon employees in a coalition called No Tech for Apartheid.

The Israeli army also has ties with Microsoft for machine learning tools and cloud storage. These examples are included here to show the imbalance of power within the greater systems of oppression at play. These tools and corporate ties are not accessible to all potential beneficiaries; it would be inconceivable for Google, Amazon, and Microsoft to sign these same contracts with, say, the Islamic Resistance Movement (Hamas).

‘Smart’ weapons, nightmare fuel

Former US President Barack Obama is credited with normalizing the use of armed drones in non-battlefield settings. The Obama administration described drone strikes as “surgical” and “precise,” at times even claiming that the use of armed drones resulted in not “a single collateral death,” when that was patently false. Since Obama took office in 2009, drone strikes have become commonplace, and their use in US international actions (in battlefield and non-battlefield settings) expanded further under subsequent administrations.

Critics say the use of drones in warfare gives governments the power to “act as judge, jury, and executioner from thousands of miles away” and that civilians “disproportionately suffer” in “an urgent threat to the right to life.”  In one example, the BBC described Russian drones as “hunting” Ukrainian civilians. 

In 2009, Human Rights Watch reported on Israel’s use of armed drones in Gaza. In 2021, Israel started deploying “drone swarms” in Gaza to locate and monitor targets. In 2022, Omri Dor, commander of Palmachim Airbase, said, “The whole of Gaza is ‘covered’ with UAVs that collect intelligence 24 hours a day.” In Gaza, drone technology has played a major role in expanding the scale of damage and the range of targets, including hybrid drones such as “The Rooster” and “Robodogs” that can fly, hover, roll, and climb uneven terrain. Machine gun rovers have been used to replace on-the-ground troops.

The AI-powered Smart Shooter, whose slogan is “one-shot, one-hit,” boasts a high degree of accuracy. The Smart Shooter was installed during its pilot stage in 2022 at a Hebron checkpoint, where it remains active to this day. Israel also employs “smart” missiles, like the SPICE 2000, which was used in October 2024 to bomb a Beirut high-rise apartment building.

The Israeli military is considered one of the top 20 most powerful military forces in the world. Israel claims that it conducts “precision strikes” and does not target civilians, but civilian harm expert Larry Lewis has said Israel’s civilian harm mitigation strategies have been insufficient, with its campaigns seemingly designed to create risk to civilians. The aforementioned technologies have helped the Israeli military use disproportionate force to kill Palestinians in Gaza en masse. As an IDF spokesperson put it, “We’re focused on what causes maximum damage.”

While AI-powered technologies reduce boots on the ground and, therefore, potential injuries and casualties of the military who deploy them, they greatly increase casualties of those being targeted. The Israeli military claims AI-powered systems “have minimized collateral damage and raised the accuracy of the human-led process,” but the documented results tell a different story. 

Documentation reveals that at least 13,319 of the Palestinians who were killed were babies and children between 0 and 12 years of age. Researchers say the UN’s reports of Palestinian casualties are conservative, estimating the true death toll to be double or even more than triple the official figure. According to one report: “So-called ‘smart systems’ may determine the target, but the bombing is carried out with unguided and imprecise ‘dumb’ ammunition because the army doesn’t want to use expensive bombs on what one intelligence officer described as ‘garbage targets.’” Furthermore, 92 percent of housing units and 88 percent of school buildings in Gaza have been destroyed, and 69 percent of all structures across Gaza have been destroyed or damaged.

In 2024, UN experts deplored Israel’s use of AI to commit crimes against humanity in Gaza. Despite all of the above, that same year, Israel signed a global treaty on AI developed by the Council of Europe for safeguarding human rights. That Israel has killed such a large number of Palestinians using AI-powered tools connected to technologies used in daily life, such as WhatsApp, is seen by some as a warning of what could one day befall them, and by others as a blueprint for efficiently systematizing supremacy and control.

This piece argues that the issue is not only the lack of human oversight of data and AI tools; who collects, owns, controls, and interprets the data, and what their biases are (whether implicit or explicit), is a key part of understanding the actual and potential harm and abuse. Furthermore, focusing exclusively on technology in Israel’s committing of genocide in Gaza, or in any war for that matter, risks a major mistake: absolving the perpetrators of responsibility for crimes they commit using technology. When the tools are over-emphasized, it becomes all too easy to redefine intentional abuses as machine-made mistakes.

When looking at technology’s use in geopolitics and warfare, understanding the power structures is key to gaining a clear overview. Finding the “goodness” in ultra-specific uses of technology does little in the attempt to offset the “bad.”

For the human beings whose lives have been made more challenging, and whose conditions have been made dire, by the use of technology for domination, warfare, and systems of supremacy, there is not much that can be rationalized for the better. The same can be said of other entities that use their advantages (geopolitical, technological, or otherwise) to assert control over others in more disadvantaged and vulnerable positions. To divorce the helpful from the harmful applications of technology is to lose sight of the bigger picture of not only how tech could be used one day, but how it is actually being used right now.

Normalizing surveillance in daily life https://globalvoices.org/2025/09/15/normalizing-surveillance-in-daily-life/ Mon, 15 Sep 2025 01:30:52 +0000 https://globalvoices.org/?p=843591 How technology is used to supercharge monitoring and control

Originally published on Global Voices

 

Grid in the sky above a silhouette of mountains. Image by Safa and Liz Carrigan, with visual elements from Yiorgos Bagakis, Alessandro Cripsta, and La Loma, used with permission.

This article was written by Safa for the series “Digitized Divides” and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

Surveillance, monitoring, and control have been used historically and continue to be used currently under the guise of protection and security, but, as professor Hannah Zeavin explained, “[C]are is a mode that accommodates and justifies surveillance as a practice, framing it as an ethical ‘good’ or security necessity instead of a political choice.”

Tactical Tech is based in Berlin, the former capital city of international espionage. The Ministry for State Security (also referred to as the Stasi) was the state security and secret police of the former East Germany (German Democratic Republic or GDR) from 1950 until 1990. It is known as one of the most repressive police organizations to have existed. Upon the dissolution of the Stasi, thousands of protesters occupied its Berlin headquarters and prevented it from destroying its records. What survives includes nearly two million photos and so many files that, laid end to end, they would stretch more than 111 kilometers (70 miles).

The Stasi also conducted international operations that had lasting effects abroad. They extensively trained the former Syrian Mukhabarat (secret police) of the now-fallen Assad regime under Hafez al-Assad: “[M]ethods of interrogation, infiltration, disinformation and brutal extraction of confessions were meticulously hammered into the minds of Syrian intelligence officials by senior Stasi agents.” With the fall of the GDR and the Berlin Wall, the Stasi was dissolved and East and West Germany reunified.

While Germany has taken some steps to reckon with its past, surveillance is still ever-present. German states have been using Palantir software to support population surveillance efforts since 2017. In 2021, Human Rights Watch raised concerns over two laws that were amended, which granted more surveillance powers to the federal police and intelligence services. While Germans have experienced a long and persistent history of surveillance and have gained a reputation for taking privacy issues very seriously, this perspective has changed over time. A 2017 study that surveyed over 5,000 Germans on various privacy-related topics found that “Germans consider privacy to be valuable, but at the same time, almost half of the population thinks that it is becoming less and less important in our society.” 

Although the Stasi are world-famous for their surveillance and data collection, today’s law enforcement landscape is a smorgasbord of data. The Stasi versus NSA visualization, developed in 2013, compares the data collected by the two entities, projecting that “the NSA can store almost 1 billion times more data than the Stasi.” Using modern technologies like algorithms, and with access to digitized data ranging from health conditions to search queries and private chats, it is easier than ever to get not just a glimpse but a full picture of the lives of nearly anyone.

As Amnesty International reported, “[T]he Stasi archive is a timely warning of the potential consequences of unchecked surveillance. It shows how quickly a system for identifying threats evolves into a desire to know everything about everyone.” Tactical Tech’s project “The Glass Room” has explored this topic through the years, describing: “There is a growing market for technologies that promise increased control, security, and protection from harm. At the same time, they can normalize surveillance at a macro and micro level — from the shape of a child’s ear to satellite images of acres of farmland. Often, those who need the most support may have the least control over how or when their data is being used.”

The Glass Room’s “Big Mother” exhibit adapts the Big Brother imagery to a more nurturing figure — a mother — exemplifying how people can easily let their guards down when data tracking is framed as helpful and caring. This can be seen in the advertisements for tech products such as devices that help people monitor elderly relatives via an app, fertility tracking apps, and refugee and asylum-seeker biometrics registries. The US and Israel are among the world’s biggest suppliers of surveillance tech, including the US-based Palantir and Israel’s NSO Group and Elbit Systems, used by governments in places like the US–Mexico border, Central America, and Europe.

Monitoring minors

The so-called ed-tech industry has been gaining traction for years, even before the COVID-19 pandemic. “Ed-tech” describes the numerous technological innovations marketed to schools as benefiting students, teachers, and school administrators. Not all ed-tech is the same, and there are efforts to bring digitization to schools to reduce the digital divide, especially in more rural and low-income areas. With that said, some of the digital tools used by school administrators can equally serve as tools of surveillance. These include recording children at daycare, using AI to analyze body and eye movements during exams, and monitoring student social media.

So much monitoring is not without consequence, especially for traditionally marginalized groups. One study reported that student surveillance technologies put Black, Indigenous, Latine/x, LGBTQ+, undocumented, and low-income students, as well as students with disabilities, at higher risk. In 2023, the American Civil Liberties Union (ACLU) interviewed teens aged 14–18 to capture the experiences of surveillance in schools. One participant reflected: “…[W]e treat kids like monsters and like criminals, then … it’s kinda like a self-fulfilling prophecy.” In 2017, the Electronic Frontier Foundation warned: “Ed tech unchecked threatens to normalize the next generation to a digital world in which users hand over data without question in return for free services, a world that is less private not just by default, but by design.” Some students and parents have pushed back, and in some cases successfully blocked certain technologies from being used in schools.

Eyes everywhere

Workers are also feeling watched. From 2020 to 2022, the number of large employers who used employee monitoring tools doubled. And it isn’t only the well-known control mechanisms Amazon uses on their warehouse workers — the average office worker may also be affected. A 2023 study of 2,000 employers found that over three-quarters of them were using some form of remote work surveillance on their workers. Employers are keeping track of their employees using methods such as internet monitoring, fingerprint scanners, eye movement tracking, social media scraping, and voice analysis, among others. “We are in the midst of a shift in work and workplace relationships as significant as the Second Industrial Revolution of the late 19th and early 20th centuries,” according to the MIT Technology Review. “And new policies and protections may be necessary to correct the balance of power.”

Even cars can be turned into tools of surveillance. Getting to work and dropping off the kids at school may be taking place in a data-mining automobile. In 2023, 84 percent of car brands were found to sell or share personal data with data brokers and businesses. That same year, news broke that Tesla employees had been sharing private camera recordings captured in customers’ cars among themselves in chat rooms. This didn’t happen only once or twice but many times from 2019 to 2022. The videos included nudity, crashes, and road-rage incidents; some were even “made into memes by embellishing them with amusing captions or commentary, before posting them in private group chats.” In 2024, Volkswagen was responsible for a data breach that left the precise locations of hundreds of thousands of vehicles across Europe exposed online for months. In the US, researchers found that some license plate reader cameras were live-streaming video and car data online.

In early 2025, Tesla executives handed over dashcam footage to Las Vegas police to help identify the person responsible for the Tesla Cybertruck that exploded outside the Trump International Hotel (the attacker had used ChatGPT to plan the attack). While this particular case and the actions of Tesla executives were generally applauded in the media, it does raise questions about the broader issue of surveillance, the application of the law, and the limits of privacy.

Researchers noted about data tracking more broadly that “tactics and tools already used by law enforcement and immigration authorities could be adapted to track anyone seeking or even considering an abortion.” Finding more ways to document and track people can also translate into ever more menacing uses under different political administrations and in contexts with even fewer protections for marginalized groups.

‘Smart’ (or Machiavellian?) surveillance: The power of terminology https://globalvoices.org/2025/09/03/smart-or-machiavellian-surveillance/ Wed, 03 Sep 2025 06:30:10 +0000 https://globalvoices.org/?p=842852 Technology is used to supercharge weapons of mass oppression

Originally published on Global Voices

Yellow background and outline of a face, signifying facial recognition technology. Image by Safa, with visual elements from La Loma, used with permission.

This article was written by Safa for the series “Digitized Divides” and originally published on tacticaltech.org. An edited version is republished by Global Voices under a partnership agreement.

The terms that are used to describe technology can shape how we think about it. The word “smart” has a positive connotation, in most cases, but when it comes to technology, “smart” is usually used interchangeably with “efficient.” Imagine if instead of calling systems of surveillance “smart,” we called them “Machiavellian” — how might that change our discourse, acceptance, and adoption of them? 

Unreliable systems

Tools of monitoring and control, such as CCTV, increasingly rely on facial recognition technology, which automatically identifies unique facial data, including measurements like the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of the cheekbones, and the length of the jawline. Facial recognition is used by governments, police, and other agencies around the world, with significant results.
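To make the mechanics concrete, here is a minimal, hypothetical sketch (in Python, with invented landmark values and an invented threshold) of how a system might reduce a face to the measurements listed above and compare two captures. Real systems use learned embeddings rather than hand-picked distances, but the matching step — a similarity score compared against an operator-chosen threshold — works along these lines, and that threshold is where false matches are decided.

```python
import numpy as np

# A minimal, hypothetical sketch: each face is reduced to a handful of
# measurements like those described above, and two faces are compared by the
# distance between their measurement vectors. All values are invented.

def feature_vector(landmarks: dict) -> np.ndarray:
    """Turn raw facial measurements into a vector."""
    return np.array([
        landmarks["eye_distance"],      # distance between the eyes
        landmarks["nose_width"],        # width of the nose
        landmarks["eye_socket_depth"],  # depth of the eye sockets
        landmarks["cheekbone_width"],   # width/shape of the cheekbones
        landmarks["jawline_length"],    # length of the jawline
    ])

def match_score(face_a: dict, face_b: dict) -> float:
    """Smaller score = more similar faces (Euclidean distance between vectors)."""
    return float(np.linalg.norm(feature_vector(face_a) - feature_vector(face_b)))

# Hypothetical enrolled record vs. a new camera capture (arbitrary units).
enrolled = {"eye_distance": 6.2, "nose_width": 3.4, "eye_socket_depth": 1.9,
            "cheekbone_width": 13.1, "jawline_length": 11.8}
capture  = {"eye_distance": 6.3, "nose_width": 3.3, "eye_socket_depth": 2.0,
            "cheekbone_width": 13.0, "jawline_length": 11.9}

THRESHOLD = 0.5  # an operator-chosen cutoff; where it sits decides false matches
print("match" if match_score(enrolled, capture) < THRESHOLD else "no match")
```

The error-rate disparities described below come from the data such systems are trained on and the thresholds their operators choose, not from the distance arithmetic itself.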

One unprecedented operation by US law enforcement resulted in hundreds of children and their abusers being identified in just three weeks. This technology has also been used to find missing and murdered Indigenous people (MMIP), helping 57 families find answers in just three years. While these results are indeed remarkable and reveal the ways in which the application of technologies can be used to help people, there have also been numerous cases of facial recognition being used by US law enforcement in ways that have harmed people. 

An app called CBP One, which is required by asylum seekers at the US-Mexico border, includes a requirement for people to register themselves in a facial recognition system. But, that system “[fails] to register many people with darker skin tones, effectively barring them from their right to request entry into the US.” The systems centralizing data of asylum-seekers and migrants make longitudinal tracking of children possible. Facial recognition technologies are also used by ICE (the US’s Immigration and Customs Enforcement agency) to monitor and surveil people awaiting deportation hearings. 

In one study of facial recognition systems, MIT researcher Joy Buolamwini found that “darker-skinned females are the most misclassified group (with error rates of up to 34.7 percent). The maximum error rate for lighter-skinned males is 0.8 percent.” Harvard researcher Alex Najibi described how “Black Americans are more likely to be arrested and incarcerated for minor crimes than White Americans. Consequently, Black people are overrepresented in mugshot data, which face recognition uses to make predictions,” explaining how Black Americans are more likely than White Americans to become trapped in cycles and systems of racist policing and surveillance.

This sentiment is echoed in a report by the project S.T.O.P. — The Surveillance Technology Oversight Project. The UK and China are also among the countries that practice “predictive policing.” One researcher focusing on China describes it as “a more refined tool for the selective suppression of already targeted groups by the police and does not substantially reduce crime or increase overall security.” So the issue here is not simply about flawed datasets; it is discrimination that already exists in society, where people who hold positions of power or have police or military force can use technology to enhance their oppression of particular groups of people. Larger datasets will not remedy or negate the problem of people acting upon discrimination, racism, or other types of bias and hatred. 

Algorithms are created by people (who inherently have their own biases) and are developed using our data. The tools trained on our data can be used to harm other people. Algorithms are also used by governments, law enforcement, and other agencies worldwide. Tools and services from Google, Amazon, and Microsoft have all been used by Israel in its war on Gaza. In the United States, algorithms have been used to score risk levels for individuals who have committed crimes, assessing their likelihood of committing future crimes. But these algorithms have been found by researchers to be “remarkably unreliable” and include a significant amount of bias in their design and implementation.

In Spain, an algorithm was used to predict how likely a domestic abuse survivor would be to be abused again, with the intention of distributing support and resources to those who need them most urgently in an overburdened system. But the algorithm isn’t perfect, and over-reliance on such flawed tools in high-stakes situations has had dire consequences. In some cases, survivors mislabeled as “low risk” have been murdered by their abusers despite their best efforts to seek help and report the abuse to authorities.

In the Netherlands, tax authorities used an algorithm to help identify child care benefits fraud; tens of thousands of lower-income families were wrongly penalized, many fell into poverty, and more than a thousand children were wrongfully put into foster care. “Having dual nationality was marked as a big risk indicator, as was a low income [… and] having Turkish or Moroccan nationality was a particular focus.”

Israel surveils and oppresses Palestinians

Israel’s surveillance industry is world famous. A 2023 report by Amnesty International mapped the visible Israeli surveillance system and found one or two CCTV cameras every five meters in Jerusalem’s Old City and Sheikh Jarrah in East Jerusalem. 

Since 2020, Israel’s military-run “Wolf Pack” has been in use; this is a vast and detailed database profiling virtually all Palestinians in the West Bank, including their photographs, family connections, education, and more. The Wolf Pack includes “Red Wolf,” “White Wolf,” and “Blue Wolf” tools: 

  • Red Wolf: The Red Wolf system is part of the Israeli government’s official CCTV facial recognition infrastructure to identify and profile Palestinians as they pass through checkpoints and move through cities. It has been reported that Israel’s military uses Red Wolf in the Palestinian city of Hebron. According to a project by B’Tselem and Breaking the Silence, the Israeli military has set up 86 checkpoints and barriers across 20 percent of Hebron, referred to as “H2,” that is under Israeli military control. The checkpoints are hard to avoid in H2. As Masha Gessen wrote, Palestinians living there “go through a checkpoint in order to buy groceries and again to bring them home.” According to UNRWA, 88 percent of children cross checkpoints on their way to and from school.
  • White Wolf: Another app, called White Wolf, is available to official Israeli military personnel who are guarding illegal settlements in the West Bank, which allows them to search the database of Palestinians. Since Israel’s war on Gaza began after the October 7, 2023, attacks by the Islamic Resistance Movement (aka Hamas) on Israelis, Israel has rolled out a similar facial recognition system registry of Palestinians in Gaza. 
  • Blue Wolf: Using the app called Blue Wolf, the Israeli military has been carrying out a massive biometric registry of Palestinians, often at checkpoints and at gunpoint, sometimes at people’s private homes in the middle of the night. Israeli soldiers take pictures of Palestinians, including children, sometimes by force. Israeli soldiers also note within the app any “negative impressions [they] have of a Palestinian’s conduct when encountering them.” One source added, “It’s not that the military has said, let’s make the Blue Wolf so [the Palestinians] can pass through more easily. The military wants to enter the people into its system for control.”

A 2025 article also revealed how the Israeli military was using a large language model (such as that used by tools like ChatGPT) to surveil Palestinians. One Israeli intelligence source stated, “I have more tools to know what every person in the West Bank is doing. When you hold so much data, you can direct it toward any purpose you choose.” While the Israeli military is not the only government-sanctioned example of training AI tools on civilian data, it offers an important insight into how the latest technologies can be adopted for widespread monitoring and control.

As researcher Carlos Delclós said, “Privacy is not merely invaded; it is obliterated, as human lives are fragmented into datasets optimised for corporate gain,” and the same message can be extended to political gain. Regardless of whether we call technology by positive or negative terms, at the end of the day, the technology itself cannot be separated from the operators (i.e. humans) who deploy it. If the people who use these technologies are also inhabiting societies and working within systems that have documented concerns of discrimination and/or control, it seems quite possible that the tech will be used to cause harm. We don’t even need to imagine it. We can simply look around with both eyes open.

AI’s bitter truth: It has biases, too https://globalvoices.org/2025/04/03/ais-bitter-truth-it-has-biases-too/ Thu, 03 Apr 2025 13:12:29 +0000 https://globalvoices.org/?p=831566 The systems behind artificial intelligence reveal deep-seated biases that can negatively influence society's perceptions and behavior

Originally published on Global Voices

Illustration by Tactical Tech, with visual elements from Yiorgos Bagakis and Alessandro Cripsta. Used with permission.

This article was written by Safa Ghnaim in collaboration with Goethe-Institut Brazil and originally published on DataDetoxKit.org. An edited version is republished by Global Voices under a partnership agreement. 

Though it may seem like a “neutral technology,” artificial intelligence (AI) has biases, too — it is not the objective tool people think it is. AI is designed by people and trained on data sets. Just like you, the people who build it have certain beliefs, opinions and experiences that inform their choices, whether they realize it or not. The engineers and companies that build and train AI may think certain information or goals are more important than others. Depending on which data sets they “feed” to the AI tools they build — like algorithms or chatbots — those machines might serve up biased results. That’s why AI can produce inaccurate data, generate false assumptions, or make the same bad decisions as a person.
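As a toy illustration of the point about data sets (all group names and numbers below are invented), here is a minimal sketch in which a “model” that learns from skewed historical decisions simply reproduces the skew:

```python
# A toy, hypothetical illustration: a "model" that learns approval rates from
# skewed historical data will reproduce that skew. All data here is invented.

historical_decisions = [
    # (applicant_group, was_approved)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def learn_approval_rates(records):
    """'Training': compute the approval rate the data shows for each group."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = learn_approval_rates(historical_decisions)

def predict(group):
    """'Prediction': approve whenever the learned rate for the group exceeds 50%."""
    return model[group] > 0.5

print(model)              # {'group_a': 0.75, 'group_b': 0.25}
print(predict("group_a")) # True  -- the historical favoritism is repeated
print(predict("group_b")) # False -- and so is the historical disadvantage
```

No one wrote “treat group_b differently” into the code; the bias arrives entirely through the data the system was fed.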

AI is not magic: machines programmed by people carry their flaws

Some people talk about AI as if it’s magic, but “artificial intelligence” is just a machine. Simply put, AI tools are computer programs that have been fed a lot of data to help them make predictions. “AI” refers to a variety of tools designed to recognize patterns, solve problems, and make decisions at a much greater speed and scale than humans can.

But like any tool, AI is designed and programmed by humans. The people who create these machines give them rules to follow: “Do this; but don’t do that.” Knowing that AI tools are automated systems with their own human-influenced limitations can give you more confidence to talk about the capabilities and drawbacks of AI.

When people talk about AI, they could be talking about so many things. Check out some examples of AI tools that are especially popular and their flaws:

Text-generation tools create content based on certain keywords (or “prompts”) you define. They are trained on large amounts of text from the internet, of varying degrees of quality. You might hear these referred to as “large language models” (LLMs) or by specific product names like ChatGPT, or even more casual terms like “chatbots” or “AI assistants.” While these tools have been known to achieve feats of human-like intelligence, like acing exams, they’re also known to “hallucinate,” meaning they also generate text that is inaccurate.
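For a sense of what “trained on large amounts of text to predict what comes next” means, here is a deliberately tiny, hypothetical sketch: a model that counts which word follows which in its training text and then generates new text by sampling from those counts. Real LLMs use neural networks over vastly more data, but the underlying idea — continue the text with whatever is statistically likely, whether or not it is true — is similar, which is one reason they can “hallucinate.”

```python
import random
from collections import defaultdict

# A deliberately tiny, hypothetical sketch of next-word prediction.
# The "training data" is a few invented sentences; the model has no notion of truth,
# only of which words tend to follow which.

training_text = (
    "the cat sat on the mat . the cat chased the dog . "
    "the dog sat on the sofa . the moon is made of cheese ."
)

# "Training": count which word tends to follow which.
follow_counts = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    output = [start]
    for _ in range(length):
        candidates = follow_counts.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))
# The output looks fluent but is driven by likelihood alone: "the moon is made of
# cheese" is just as available to the model as any accurate statement it was fed.
```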

Image-generation tools create pictures or videos based on certain keywords you define. You might hear about these referred to as text-to-image models, or even by specific product names like DALL-E or Stable Diffusion. These tools can produce incredibly believable images and videos, but are also known to reduce the world to stereotypes and can be used for sextortion and harassment.

Recommendation systems show you content that they “predict” you’re most likely to click on or engage with. These systems are working in the background of search engines, social media feeds, and auto-play on YouTube. You might also hear these referred to as algorithms. These tools can give you more of what you’re already interested in, and can also nudge you down certain dangerous rabbit holes. Recommendation systems are used in important decisions like job hiring, college admissions, home loans, and other areas of daily life.
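As a minimal, hypothetical sketch of that idea (the catalog, tags, and click history are invented), a recommender can be as simple as surfacing whatever most resembles what you have already clicked — which is exactly what makes feeds both convenient and prone to narrowing into rabbit holes:

```python
# A minimal, hypothetical sketch of a content-based recommender.
# The catalog, tags, and click history are invented; the point is that
# recommendations are driven by similarity to past clicks, which narrows
# what you see the more you click.

catalog = {
    "video_1": {"fitness", "diet"},
    "video_2": {"diet", "weight_loss"},
    "video_3": {"cooking", "recipes"},
    "video_4": {"weight_loss", "extreme_diets"},
    "video_5": {"travel", "nature"},
}

watched = ["video_1", "video_2"]  # what the user has already clicked

def interest_profile(history):
    """Union of tags from everything the user has watched."""
    tags = set()
    for item in history:
        tags |= catalog[item]
    return tags

def recommend(history, k=2):
    """Rank unseen items by how many tags they share with the user's profile."""
    profile = interest_profile(history)
    unseen = [item for item in catalog if item not in history]
    return sorted(unseen, key=lambda item: len(catalog[item] & profile), reverse=True)[:k]

print(recommend(watched))
# ['video_4', 'video_3'] -- the diet-adjacent item wins, and each further click
# pushes the profile deeper in the same direction: the rabbit-hole dynamic.
```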

While some experts believe AI tools, like chatbots, are getting “smarter” on their own, others say they’re full of mistakes. Here are some reasons why you might want to think about the biases behind AI:

  • Some of the data they’re trained on might be personal, copyrighted, or used without permission.
  • Depending on the data sets, they might be full of hate speech, conspiracy theories, or information that’s just plain wrong.
  • The data might be biased against certain people, genders, cultures, religions, jobs, or circumstances.

AI tools are also trained on data that leaves stuff out altogether. If there’s little or no information about a group of people, language, or culture in the training data, it won’t be able to generate any answers about them. A key 2018 study by Joy Buolamwini called “Gender Shades” identified how widespread facial recognition systems struggled to identify the faces of People of Color, especially Black women. By the time of the study, these flawed tools had already been used routinely by police in the United States.

Shine a spotlight on bias to avoid reproducing it

Now that you know about some of the weaknesses that can exist in AI data sets, which are built by people like us, let’s take a look at ourselves. How can the way our human brains work shed light on AI's biases?

There are types of biases that are deeply ingrained in individuals, organizations, cultures, and societies. Shine a light on them by reflecting on these questions:

  • How do you expect others to present themselves, including how they behave, dress, and speak?
  • Are there any groups that face more risk, punishment, or stigmatization because of what they look like or how they behave, dress, or speak?

The biases you just reflected on often rely on assumptions, attitudes, and stereotypes that have been part of cultures for a very long time and can influence your decision-making in unconscious ways. This is why they’re called “implicit biases” — they’re often hardwired into your mindset, difficult to spot, and uncomfortable to confront.

Common implicit biases include:

  • Gender bias: the tendency to jump to conclusions regarding people from different genders based on prejudices or stereotypes.
  • Racial and/or ethnic bias: the tendency to jump to conclusions regarding people based on the color of their skin, cultural background, and/or ethnicity.

Harvard has a huge library of implicit bias tests you can try for free online to see how you do and which areas you can work on. With a lot of implicit biases, it can feel like a journey to even identify those beliefs. It’s unlikely to happen overnight, but why not start now?

Everything is m(ai)gnified

Now that you’ve seen common examples of these thought patterns and implicit biases, imagine what they might look like on a much larger scale. Thought patterns and implicit biases such as these can affect not only individuals but whole groups of people, especially when they get “hard-coded” into computer systems.

Using the free text-to-image generation software Perchance.org, the prompt “beautiful woman” returns the following results:

AI images generated on Perchance.org on August 13, 2024. Images supplied by Tactical Tech.

If the tool created six images of “beautiful women,” why do they all look almost identical?

Try it yourself — do your results differ?

Bigger studies have been conducted on this topic, with similar results. You can read about one such study and see infographics here: “Humans are biased. Generative AI is even worse.”

AI tools are not neutral or unbiased. They are owned and built by people with their own motivations. Even AI tools that include “open” in their name may not necessarily be transparent about how they operate and may have been programmed with built-in biases.

You can ask critical questions about how AI models are built and trained to get a sense of how AI is part of a larger system:

  • Who owns the companies that create AI models?
  • How do the companies profit?
  • What are the systems of power created or maintained by the companies?
  • Who benefits from the AI tools the most?
  • Who is most at risk of harm from these AI systems?

The answers to these questions might be difficult or impossible to find. That in and of itself is meaningful.

Since technology is built by people and informed by data (which is also collected and labeled by people), we can think of technology as a mirror of the issues that already exist in society. And we can count on AI-powered tools to reinforce power imbalances and to systematize and perpetuate biases — more rapidly than ever before.

As you’ve learned, flawed thought patterns are totally normal and everyone has them in one way or another. Starting to face the facts today can help avoid making mistakes tomorrow, and can help you identify flaws within the systems, like AI.

How artificial intelligence can be weaponized for harassment https://globalvoices.org/2024/12/26/how-artificial-intelligence-can-be-weaponized-for-harassment/ Thu, 26 Dec 2024 05:38:31 +0000 https://globalvoices.org/?p=826133 Beyond the popularity of AI emerges a significant issue: it supercharges harms, including non-consensual intimate imagery (NCII)

Originally published on Global Voices


Illustration by Tactical Tech, with visual elements from Yiorgos Bagakis and Alessandro Cripsta.

This article was written by Safa Ghnaim in collaboration with Goethe-Institut Brazil and originally published on DataDetoxKit.org. An edited version is republished by Global Voices under a partnership agreement. 

Artificial Intelligence (AI) tools — especially those that generate images, videos, and audio — have been marketed as “creativity apps” for individuals and “efficiency tools” for businesses, but there are few ways to control how they’re actually used and the harms that these artificially created visuals can cause.

It may come as no surprise that AI tools are making the problem of online harassment even worse, for example through the creation and sharing of non-consensual intimate imagery (NCII): intimate photos or videos, including nude, sexually suggestive, or explicit images of someone’s real or AI-generated body, shared without their permission.

NCII affects many people — not just the celebrities you might hear about in the media — and it’s not easy to deal with. Even platforms and law enforcement are struggling to keep up.

The problem is bigger than you might think

The technology is advancing so quickly that it now takes only one image of someone (even a totally wholesome picture) to create sexually explicit content using any one of many AI-powered tools.

While certain AI tools supercharge the problem of harassment by making it easy to create NCII of anyone, other AI tools are being used to tackle it. However, AI-powered tools trained to detect AI-generated images are not perfect, and much of the work of identifying such images and taking them down still falls on people working as content moderators.

One of the most resonant cases in 2024 involved sexually explicit AI-generated deepfake images of Taylor Swift. These images first appeared on 4chan and, within minutes, spread like wildfire across various social media sites. One of the images was seen over 47 million times before it was removed. It’s possible that these images are still being shared online since there is no real way to completely wipe them off the internet.

But this is not an isolated case. According to a 2021 study, 41 percent of over 10,000 survey respondents in the US said that they had personally experienced a form of online harassment. Of the respondents under 35 years old, 33 percent of women and 11 percent of men said they had specifically experienced sexual harassment online.

On a similar note, a 2023 analysis of over 95,000 deepfake videos found that up to 98 percent of them were deepfake pornography, and, of those, 99 percent of the individuals targeted were women. Other vulnerable groups, such as minors and LGBTQ+ people, are also disproportionately victims of online sexual harassment.

What steps can you take to protect yourself from this kind of harassment?

Online platforms have some guardrails in place to help you lock down your information from unwanted eyes. While these tips won’t build you an impenetrable fortress, they can make it harder for bullies or those trying to do harm to get to you.

Every platform is different, but each offers settings and options to its users. As an example, here are a few things you can do to tighten controls on Instagram and TikTok:

  • Set your profile to “private.” On platforms like Instagram and TikTok, you can set your profile to private so that, in most cases, only people you approve as followers can see what you share. However, non-followers can still see comments you make on other people’s posts and can even send you messages. Learn to set your profile to private on Instagram and TikTok.
  • Remove followers or block people. If someone is giving you a hard time or making you feel uneasy, you can remove them as a follower or block them altogether. But if you know the person in real life, you’ll need a different strategy. Learn how to block people on Instagram and TikTok.

It takes a village

Looking at how many people are getting targeted by online harassment, it seems logical to assume there are a lot of harassers, too. But the fact is, it only takes one bad actor to do widespread harm. Would you consider someone who re-shares NCII a harasser, even if they didn’t originally generate the images? In the example of Taylor Swift, it took only a few people to create the NCII of her, but it wouldn’t have gone viral without many people sharing it.

So, how can you be an ally to someone who is being targeted and harassed? Whichever social media platform you use, it will likely have a “report” function. On apps like Instagram, you can report specific posts, or entire profiles or accounts, if you spot anything that looks like abuse or harassment. Reporting is a useful way to flag problematic accounts or content, and Instagram won’t share the identity of the person reporting with the person being reported.

When you “report” on Instagram, the platform may remove the post or may warn, deactivate, or ban the profile or account, depending on what happened and whether it goes against their Community Guidelines. It’s worth noting that Meta’s Community Guidelines are not always helpful, and post takedowns and account bans have recently caused controversy.

If you personally know the person who is being targeted, gently reach out to them if you feel comfortable doing so. It might be that they had no idea this was happening and could react with strong distress, anger, or sadness. If you feel prepared to support them, you can offer them resources (like those linked at the end of this guide) and help them monitor and document the harassment.

Even though you most likely will not want to look at the harassment again, it might be helpful to document it before it’s taken down. Consider taking a screen recording or screenshot of the post or comment, making sure it includes the other person’s account name and the date. Save the documentation somewhere safe and out of sight on your phone or computer; for example, on some phones you can set up a password-protected “secure folder.”

Documentation, when done well, can be useful in case the person being targeted decides to bring the issue to law enforcement and needs some sort of evidence.

It’s important that the person being targeted decides what they want to do. Do they want to contact law enforcement, get a lawyer, or reach out to their school, university, or workplace? Or would they rather keep it confidential as much as they can? Especially in a situation of NCII, so much choice is being taken away from them, so make sure to support them in getting back in control.

Know where to go for help

If you or someone you know is targeted with NCII, know that there are dedicated organizations out there ready to help. You don’t have to deal with it alone. Here are just a few that support English speakers:

  • Chayn (worldwide): Chayn provides resources and support to survivors of gender-based violence.
  • StopNCII (worldwide): StopNCII has a bank of resources as well as a tool that can help you get non-consensual intimate images taken down.
  • Take Back the Tech (worldwide): Take Back the Tech offers explainers and resources like Hey Friend, with tips on how you can support your friends when they are targets of harassment.
  • RAINN's National Sexual Assault Hotline (USA): RAINN provides a private hotline where you can chat online or call a staff member who has undergone crisis training.
  • Take It Down (USA): Take It Down helps you, step-by-step, to file a removal request for NCII.
  • Cyber Civil Rights Initiative (CCRI) (USA): CCRI includes step-by-step guidance and lists of US-based attorneys and laws. They have a list of international resources, too.
  • Revenge Porn Helpline (United Kingdom): The Revenge Porn Helpline gives advice to adults who have been targeted.
  • Umi Chatbot (Australia): The Umi Chatbot is a quick way to get information about how to deal with NCII. The website also has resources about collecting evidence and reporting.

For more tips and resources on how to deal with online harassment, see the Data Detox Kit (DataDetoxKit.org), where this guide was originally published.


Private firms profiting from your vote: The role of the influence industry around the world https://globalvoices.org/2023/12/06/private-firms-profiting-from-your-vote-the-role-of-the-influence-industry-around-the-world/ Wed, 06 Dec 2023 14:33:12 +0000 https://globalvoices.org/?p=799426 Political parties outsource the influence of voters in 2024 elections

Originally published on Global Voices

Abstract Illustration with numbers and a vote sign.

Illustration created with the Information of The Influence Industry Explorer by Yiorgos Bagakis for Tactical Tech, used with permission.

Whether the debate is about AI-generated deepfake videos and ChatGPT disrupting trust in elections, or about whether social media platforms can monitor violence-inciting misinformation, digital technologies remain a key topic for 2024 election campaigns around the world.

Digital campaign tools and tactics are especially difficult to monitor as political parties often outsource the work to opaque private companies, agencies, and consultants. In 2023, for example, the Communist Party of Nepal (UML) worked with a freelance consultant to develop their campaign strategy, the Frente de Todos in Argentina paid Digital Ads S.A.S to develop their communications content, and Indonesian political parties hired “buzzer” agencies to spread their message on social media. Worldwide, there are over 500 political consultants, software providers, data brokers, and technology firms that make up the influence industry. This industry stands to profit from one or more of next year’s national elections, including in India, Indonesia, Georgia, Mexico, South Africa, the UK, Ukraine, and the US.

In the run-up to an election, political parties use crafted messages to influence voters’ opinions and actions. Political actors rely heavily on platforms such as Google’s Ad Library (covering Argentina, Brazil, Chile, India, and South Africa) or Facebook’s political Ad Library (covering over 200 countries). However, candidates also pay private and covert consultants to design their social media strategy and broader online and offline campaigns. The companies in the influence industry collect data on our locations, our opinions, and our behaviors; create voter profiles representing what they believe to be our political interests and leanings; and, from this information, design campaigns, communications, and content to encourage us to, or discourage us from, voting for a certain party. Not only are these firms hired to produce informative and accountable campaigns, but they can also be hired to spread misinformation or create disruption within elections. For example, the infamous firm Cambridge Analytica and its close collaborator AggregateIQ were hired to spread politically divisive and violent content through social networks to intimidate voters in Nigeria.

Despite playing a substantial role in shaping political participation, these firms are often able to work out of public view and to disregard important democratic processes. In countries with electoral transparency systems — such as Argentina and the UK — where political parties must declare their campaign spending, the invoices often reveal very little about the actual services the firms provide, and at times the information is purposefully obscured. In Bolivia, influence groups don’t need to bother collecting data themselves when they can buy datasets on cheap CDs from people who were previously employed to produce that data for someone else. Even with data protection laws, supposedly transparent firms can still create “anonymous profiles” that disconnect users from their original data yet still use that data to build profiles that identify individuals and groups.

Through understanding these firms, and their role in the complex and increasingly unstable landscape of digital politics, we can begin to hold political groups to account and make more informed choices on voting day.

Illustration 2024 elections: Indonesia, UK, India, South Africa, Mexico, United States. 500+ political consultants, software providers, data brokers and technology firms.

Illustration created with the Information of The Influence Industry Explorer by Yiorgos Bagakis for Tactical Tech, used with permission.

Should private firms take sides in politics?

The political ideology, especially the partisanship, of a firm has been important to the foundations of the data-driven influence industry. The media coverage of data-driven influence began in earnest after the success of data-driven tactics in Barack Obama’s grassroots-style campaigns for US president in 2008 and 2012. Many of the individuals involved went on to establish consultancy firms, including 270 Strategies, the Messina Group, and Blue State Digital, which all align themselves with “progressive” politics. In response to the visibility of these progressively aligned firms, Thomas Peters, a conservative blogger, wrote, “The only way to defeat Democrats was to learn from their tech advances, and then leapfrog them.” With this ethos in mind, he started uCampaign in July 2013 — a Republican party-aligned firm that develops campaign apps. In a similar case, Harris Media, a communications and marketing company, was founded by Vincent Harris, a conservative political strategist described as “the man who invented the Republican Internet.”

All of these firms have exported their work, resources, and politics worldwide, often gaining insights that benefit their agendas without any impact on the places where the firms’ consultants and staff themselves live. For example, private firms have tested tactics across various African countries before returning to France or the US. Their politics, in some cases, match the “sides” they choose in their home country: Harris Media has worked with the UK Independence Party (UKIP), Alternative für Deutschland (AfD) in Germany, and Israeli Prime Minister Benjamin Netanyahu.

In contrast, Jeremy Bird, who founded 270 Strategies, worked with V15, a group opposing Netanyahu. In some cases, the companies work across various political groups, depending on who they have contacts with and who will pay for their services. For example, The Messina Group has worked with Enrique Peña Nieto, the former president of Mexico; Kyriakos Mitsotakis, the prime minister of Greece; and Mariano Rajoy, the former prime minister of Spain. The US-based values of these firms — the politics they support, what they see as legitimate, and what they see as advantageous — are embedded in their work as they influence politics around the world.

These firms can earn vast quantities of money. According to the US Federal Election Commission, one of the few sources that gives insight into the money spent on these firms, Harris Media earned over USD 1.12 million from US political groups over the last three years. Crosby Textor (now CT Group) has been involved in campaigns in Australia, Italy, Malaysia, the United Arab Emirates, Sri Lanka, and Yemen. According to the Electoral Commission in the UK, since 2010 Crosby Textor has made over GBP 8 million from working with the Conservative Party and has also made several donations of thousands of pounds to the party.

While these firms profit from shaping our politics worldwide, they remain opaque and unelected, and many of them manage and influence the direction of political campaigns — and, therefore, the political environment — in several different countries. The companies collect data on individuals based in one country and analyze that information to build profiles they can use to their advantage internationally. The intelligence they hold on citizens creates risks, including data breaches, misuses of data, and changes of hands in political power — especially those that come during or after divisive conflict. Their business structures — and often their values — are focused on profit: content that brings them ad revenue or appeals to political parties with funds, rather than on principles of good political practice. The companies do not need to worry about whether voters are well informed, whether healthy deliberation is occurring among groups, or whether under-represented groups are being heard.

The rise of these firms, and the digital campaign tactics they support and engage in, is inextricably linked to increasing political polarization. This makes it important to understand and question their role in politics. By asking questions, interrogating the firms, and building transparency, we can learn how to effectively regulate or manage our own political environments. The role of the influence industry is substantial, and identifying its political and profit-driven agenda is vital to understanding the magnitude of its power to influence political outcomes — and tensions — worldwide.

How companies collect private data about reproductive health https://globalvoices.org/2023/07/25/how-companies-collect-private-data-about-reproductive-health/ Tue, 25 Jul 2023 08:11:10 +0000 https://globalvoices.org/?p=792007 What are the consequences for people’s reproductive rights?

Originally published on Global Voices

An illustration of a grey hand over a purple and blue background with lines and dots.

Illustration by Tactical Tech.

This article was written by Stefanie Felsberger and originally published on TacticalTech.org. An edited version is republished by Global Voices under a partnership agreement. 

After the constitutional right to abortion was struck down in the United States in 2022, debates about the surveillance of people’s reproductive lives proliferated globally. Much of the debate focused on period tracking apps: how they could share user data and reveal a possible pregnancy termination, whether people should delete their apps, and which apps were safe.

This focus on period tracking apps sidelined the deeper, systemic ways in which all people’s online interactions are tracked, “datafied,” and monetized, which can put people’s reproductive rights under threat. But what are the intersections between data collection by private companies and the policing of people’s reproductive rights by different states? This article aims to shed some light on this ongoing discussion.

Many different companies worldwide track and gather data about reproduction, especially menstruation. This often takes place without people’s knowledge, and without their awareness of the data’s financial value, the extent of the tracking, or its harmful individual or collective consequences.

Data capitalism: What is the value of reproductive health data?

Today, the business models of most technology or internet-based companies are deeply connected to the large market for data. Most of these companies aim to extract as much personal data as possible from users and potential users, from anywhere, by any means available. Different actors then either sell or share this data, or aggregate it and sell access to user profiles or to insights derived from it. This system also enables governments to gain access to people’s private information: state institutions policing people’s reproductive or other rights can request access to people’s private information from companies, require them by law to submit information through a subpoena, or even buy datasets on the market. A wider focus on how companies collect and monetize data therefore needs to be part of the analysis.

Information on people’s menstrual cycles is very much part of the kinds of datasets that are shared and sold because menstrual and other health data can reveal intimate details about people, their bodies, and their habits. This information can be used for many different purposes: to gain insights into a person’s overall health, to create advertisements that try to exploit (real or perceived) hormonal fluctuations, or to determine whether or not someone is pregnant or trying to become pregnant — something advertisers and governments or even potential employers want to know. In addition, cycle-tracking data can reveal a lot about someone’s political convictions, which allows companies working in the influence industry to target people with political ads, thereby influencing election outcomes worldwide.

Illustration of hands interacting with a cellphone.

Animation ‘The Many Hands on Your Intimate Data.’ This project is supported by SIDA. Research by The Bachchao Project. Poster design by Tactical Tech, animation by Klaas Diersman.

A practice known as cycle-based advertising seeks to exploit the hormonal changes that take place during the menstrual cycle. One of the most valuable pieces of information in advertising is whether or not someone is pregnant. Having a baby means that someone’s (shopping) routines are reshaped for years to come. It is a crucial time for companies to ingrain themselves in the shopping habits of new parents, and especially of women, because shopping for essential household items is part of household labor, something women have historically been made responsible for. For example, the North American retailer Target, which has been in this business for at least 20 years, made headlines in 2012 when the company allegedly knew about a teenager’s pregnancy before her parents did. This is also reflected in the monetary value of pregnant people’s data: if an average person’s data is worth USD 0.10, a pregnant person’s data is worth USD 1.50, fifteen times as much. Being able to identify pregnant shoppers can therefore help companies make millions in profit.

Period tracking applications can even facilitate companies’ and governments’ access to information about people’s reproductive lives. In a 2019 study, researchers revealed that Maya, a popular period tracker with more than 5 million downloads on Google Play, informs Facebook when a user opens the app. Crucially, Maya shared data with Facebook before users agreed to the privacy policy.

But period trackers are not the only companies in the business of monetizing data about reproduction. Advertising companies that sell location-based information also monetize data about pregnant people — and, by extension, can reveal who has visited an abortion clinic. Another example is SafeGraph, a company that has sold information about people visiting clinics that also provide abortions. Even though SafeGraph sold anonymized information about groups of people, the datasets also included the locations people came from, how long they stayed at the clinic, and where they went afterwards. This combination of information makes it easier to reverse the anonymization of the dataset.

Data brokers, in general, seek to acquire all kinds of information about people in order to add it to different user profiles; often this includes extremely sensitive information, such as whether someone is a survivor of sexual assault, or details about specific conditions, illnesses, or treatments.

Animation created with the support of SIDA. Research by The Bachchao Project. Poster design by Tactical Tech, animation by Klaas Diersman.

While being targeted by advertisements might seem a benign consequence of companies having access to intimate data, it is not entirely so. These ads are not only designed to sell you products; the same profiles can also feed ads promoting political candidates and ideologies. In addition, pregnancy-related ads are pervasive and “sticky”: once you are identified as an “expecting parent” and “female,” you see these ads everywhere — even if the pregnancy ends pre-term.

This information could also affect insurance rates if it is routinely collected by employers (in the US), as menstrual data can reveal underlying health conditions or whether someone is trying to become pregnant. Pregnancy is still a reason why people are not promoted or hired; the collection of cycle-related data by an employer could therefore also harm job prospects.

It’s not just about ‘choice’

The solution to this problem is often framed as one of individual or consumer choice and privacy. While informing individuals of what happens to their data and providing them with alternatives is a crucial step towards a solution, a discussion on how race, gender, class and ability limit what choices people can make must also be part of the conversation.

Individuals changing their tracking decisions does very little to mitigate the collective harms and downstream effects of this pervasive collection of data about people’s reproductive lives. Even if you find the safest period tracker, one that does not sell any data, your data could still be used to disadvantage others.

It is crucial to consider how the risks and obstacles we face, online and offline, are not the same for everyone. Marginalized groups are more at risk when it comes to surveillance, and their reproductive rights have historically been grossly violated. Centering these histories, solidarity, and mutual care must be part of any discussion on pathways out of this overwhelming situation.
