From blocking ads to faking their interests, internet users are attempting to take back power from invasive targeted advertising. But just how effective are these tactics?
Targeted online advertising is inescapable, with thousands of adverts popping up on the average person’s screen throughout the day. They’re often unnervingly personal – a sofa you looked at once will continue to haunt you for weeks, or after you send messages discussing a particular drink, similar beverages might become ubiquitous across every site you visit. Thanks to highly trained targeted advertising algorithms, your browsing history, follows, comments and clicks are relentlessly used to try to sell you things.
More than just an invasion of privacy, the assumptions drawn by these algorithms about an individual’s interests are often sexist and racist. A recent study by Northeastern University found that “some users [are] less likely than others to see particular ads based on their demographic characteristics”. The research showed that job postings for careers like nursing and secretarial work were shown to more women, while men were shown adverts for taxi driving and janitorial jobs. This bias filters down to the products on offer, too: women will often be offered clothes and intermittent fasting apps, while men get adverts for cryptocurrency trading platforms and gambling websites.
This kind of bias, along with the relentless consumerism it pushes, permeates every facet of our existence – so it’s little wonder that people are beginning to look for ways to push back. Kat, 23, recently shared her tactics in a now-viral TikTok, where she talked about “training the algorithm like it’s a dog”. In the video, she discussed the practice of punishing apps that showed her things she didn’t like by “closing the app”, and rewarding content that she approved of by “taking a screenshot”.
While Kat admits to using these tactics partly to improve her shopping experience, she also deploys them to send a message to companies whose products or practices she finds unethical. “I punish the algorithm by reporting ads I don’t like as spam – because they are,” she told Huck via email. “I don’t like to give money to mega brands like Amazon or Walmart on principle, but Amazon still pays to be on every screen of mine nine times a day. I’m not mad at the concept of advertising, just the lack of taste the mega companies have – especially since the pandemic started. Over time, I’ve gotten much more interesting and effective ads.”
Anna H, an illustrator from Belfast, also began trying to train her algorithm during the pandemic. At first, she wanted to avoid being shown triggering content for the sake of her mental health. She then realised that she could start punishing companies whose principles she didn’t agree with, especially the exploitative fast fashion industry.
“I thought it was something I could do in support of the small businesses whose designs have been ripped off – these companies apologise, remove the designs then do it again to someone else a few months later,” she explains. “By blocking their ads, I feel like I’m supporting those who are working hard. It definitely bothers me more that these brands don’t properly pay their workers, and I also feel solidarity with other small business owners.”
Anna B, the founder of a creative agency, has also tried to push back against targeted advertising by reporting brands she doesn’t support – especially those with a sexist agenda. “I get so many weight loss, skinny fetish and other disgusting ads targeted to me all the time,” she says. “As a woman on social media it’s horrifying how easy it is to feel bad about yourself from the way ads are targeted to you.”
“It’s about messing with their ad spend and making them pay out on things that won’t bring a return,” Anna continues. “It’s a small victory, as they aren’t spending on the right things like taxes and proper wages.”
And there are serious risks associated with having your information collected by these advertising algorithms too, especially in the event of a data breach. The amount of information companies hold on users is vast, and can contain highly personal details. “We’re talking about tons of data points here, including the user’s location history, gender, health conditions, personal interests, political affiliation, sexual orientation, race, relationship status, browsing habits, and more,” explains Attila Tomaschek, digital privacy researcher at ProPrivacy.
“Invasive targeted tracking is indeed a major encroachment on our civil liberties and fundamental rights to privacy,” Attila continues. “These ad targeting mechanisms do not have the best interests of internet users at the top of their list of priorities. They’re there to generate revenue by collecting as much data as possible. This data is all too often misused, and can even be used against internet users in malicious, targeted phishing campaigns, for example, if the data is not properly protected and falls into the wrong hands.”
Young people are more attuned to these dangers than they are often given credit for. A 2020 survey conducted by F5 Labs found that 81 per cent of Gen Z-ers wanted more privacy online, and 93 per cent of millennials agreed. Despite growing up in a digital age where sharing their lives online is seen as the norm, they know their activity is being tracked and the majority feel uneasy about it.
And some of them are taking their punishment of the algorithm a step further – by deliberately attempting to confuse it. Gianna, 23, has been doing this for a while, as a way of throwing its recommendations off, testing its limitations and learning how targeted advertising works. “On YouTube, I’ll play a little game where I set out with something in mind that’s fully out of my usual interests and see how close the recommendations can get to it,” she explains. “I did that recently, by typing ‘dolls’ into the search bar, clicking on a video, then clicking on maybe two related videos, searching up ‘dolls’ again, then searching ‘baby dolls’. Now all my ads are for reborn dolls and the stuff used to make realistic dolls as well. I’ll see doll eyeballs for sale on Etsy in between my regular ads.”
Although messing with the algorithm can be fun, Gianna does find its immense power unsettling. “I think it’s the quiet collection that bothers me,” she says. “That phenomenon of saying things out loud and then having related items be pushed at you sometimes really gets to me. When I’m speaking with my roommate, and maybe one of us is talking about blenders, next thing you know we get ads for them on Instagram, Facebook or Snapchat.”
So how effectively can individuals fight against the omnipresent algorithm – can you really teach targeted advertising a lesson? Attila is sceptical: “The algorithm is strong enough to withstand punishments like blocking and reporting. You might see fewer adverts from specific companies, but it will still know personal information – even if you intentionally try to confuse it.”
If you truly want to push back against surveillance from big companies, and protect your information, you’ll need to adopt a more thorough method. “These algorithms are incredibly sophisticated and rely on much more than just what you click on when you browse the web to feed you those targeted ads,” Attila continues. “You’re much better off using privacy tools like ad blockers and VPNs to throw algorithms off your scent.”
But when it comes to fostering a more positive social media feed, there is no harm in blocking and reporting ads that bother you. Anna H says that her plan has, to a degree, worked – she is now seeing fewer adverts that she feels the need to block. For her, ‘punishing the algorithm’ feels “like I have a bit of control over what my social media feed looks like. I’m no longer bombarded with brands that don’t pay properly and employ mostly women in dangerous work conditions. Weeding out adverts I don’t want to see does make it a happier place to be.”
Follow Anna Samson on Twitter.