Filter bubbles supercharged by social network sites and digital news platforms are widely seen as a problem. But somehow, everyone believes that only “the others” are blinded by them. In this article, I will argue that filter bubbles may very well be humankind’s #1 challenge. The time has come to end this unintended but destructive consequence of artificial intelligence. I will show why simple regulation will not fix filter bubbles and suggest a concrete solution.
Take a moment and think back on the last ten years of your life: What has been your biggest personal learning? For me, it has been the value of compromise. I have seen too many of my tried-and-true convictions refuted, or at least moderated, by real life. Take minimum wage, for example: neo-classical theory told us it would do nothing but cut low-wage employment. Turns out reality is a lot more complex on the effects of minimum wage. Turns out reality is a lot more complex on a lot of things! Sorry for being a slow learner, but I finally started to understand why resilient societies are built on facilitating and sometimes enforcing compromise.
So how do we improve our ability to gain consensus? Recently, I came across a video of an experiment that really rocked my world: People in the street were asked about their opinions on a number of contested issues (like “violence used by Israel against Hamas is – or is not – morally defensible”). At the end of the short interview, when showing the respondent her answers, the interviewer used a simple trick to present her with the opposite of what she had answered and asked her to elaborate. What do you think happened?
My guess would have been that the interviewers were beaten up in the street, but no! “A full 53% of the participants argued unequivocally for the opposite of their original attitude” (Hall, Johansson, and Strandberg, 2012).
Obviously, facts did not trigger this sudden change of heart, because facts are still subject to our very subjective interpretation: In their famous study “They Saw a Game: A Case Study” (1954), the psychologists Albert Hastorf and Hadley Cantril found that when the exact same motion picture of a college football game was shown to a sample of undergraduates at each of the two opposing schools, each side perceived a different game, and their versions of the game were just as “real” to them as other versions were to other people.
What these experiments show is that our attitudes towards alternative viewpoints matter if we want to compromise. Good news: those attitudes can be shaped (as also shown by Leeper, 2014). Public broadcasting (for all its deficiencies) has tried this for decades, at least to some degree, e.g., in the UK, Germany, and Japan.
Yet compromise is joining the list of endangered species these days. Polarization has been on the rise for decades, but by now it seems to have become a challenge of global proportions.
What is wrong with polarization, you may wonder? There is evidence that a polarized environment “decreases the impact of substantive information”. In other words, facts no longer matter, party lines do. (Interestingly, scientific literacy does not inoculate against extreme viewpoints, while scientific curiosity – aka an open mind – seems to help.)
Still not concerned? Some say that the laissez-faire COVID-19 response in some countries, the Brexit referendum, and of course the U.S. presidential elections since 2016 have been shaped by the polarization that is fueled by “filter bubbles” on social network sites. (I will not withhold the potential counterargument that polarization has increased particularly in age groups that are less likely to use the internet.)
Runaway polarization risks political deadlock, resulting in more global warming, more poverty, and more violent fights over the fair distribution of wealth, water, and healthcare. It can also lead to more autocratic societies. It can lead to more hunger, violence, hatred, distrust, depression, and death. That’s why it is worth taking a closer look:
“Filter bubbles” (aka “echo chambers”) describe the increasing probability of
- you being exposed only to news that fits your current worldview and
- your personal news feeds becoming more and more extreme.
A fascinating, data-driven analysis by Mark Ledwich shows the traffic flows between various YouTube political channels suggested by the site itself. Apparently, social network sites have inadvertently fueled the growth of filter bubbles not only by providing an efficient means of content distribution for basically everyone (it is not without irony that you most likely read this article on a social network site) but primarily by using machine learning algorithms that dramatically exacerbate the problem. This is at the heart of the matter, so we need to dig deeper.
If you have not yet seen Netflix’s “The Social Dilemma” documentary, let’s take a brief look under the hood of your news feed. It’s worth spending a minute on the fundamentals of this phenomenon. Please bear with me and make an effort to understand this – it is important, really important. (Why? Because politicians around the world have so far seemingly not taken the time to understand it and have consequently failed to act effectively!)
Naturally, digital media sites want us to keep reading; they want us to stay engaged. This is a perfectly legitimate objective for any commercial website, because user engagement = time on the site = ultimately ad revenue. Since each of us responds differently to different pieces of content, they tailor each and every news feed. To do this for millions of different users, social network sites use deep learning algorithms. These algorithms are constantly trained to predict the potential user engagement of every piece of content in your news feed. Training works like this: they take the content, language, and visual information of a post as the input, and they measure actual user engagement (comments, shares, likes, etc.) as the desired output. Based on this closed feedback loop, the algorithms continuously learn to predict what drives user engagement from real-life data. This works just like Google being able to predict whether a picture shows a cat or a traffic light using examples that have been categorized by a human (“supervised learning”).
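To make this a little more tangible, here is a deliberately tiny sketch of that feedback loop. It uses scikit-learn and a handful of made-up posts and engagement labels purely for illustration; the platforms, of course, run far larger deep learning models over text, images, and user histories, not this simplified setup.

```python
# Toy version of the loop described above: post features in, observed
# engagement out, predicted engagement used to rank the next feed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: past posts and whether this user engaged
# (commented, shared, liked). On a real platform this label stream
# updates continuously, which is what closes the feedback loop.
posts = [
    "Outrageous new claim about the election!",
    "Local bakery wins regional award",
    "You won't believe what this politician just said",
    "City council publishes annual budget report",
]
engaged = [1, 0, 1, 0]  # 1 = engaged, 0 = scrolled past

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, engaged)

# Score a new candidate post; higher predicted engagement means it is
# shown more prominently in the feed.
candidate = ["Shocking revelation about your favorite party"]
print(model.predict_proba(candidate)[0][1])
```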
And this is the key reason why it is wishful thinking to assume that social network sites will fix this themselves: these algorithms are one of the cornerstones, perhaps THE key ingredient, of their ongoing success!
One of the key triggers of user engagement is fake news, because it travels “farther, faster, deeper, and more broadly than the truth” (as shown in the landmark study by Vosoughi, Roy, and Aral, 2018). That’s why it is prioritized by the algorithms. But fake news is just one element of the problem. More importantly, extreme political views that reinforce users’ own opinions presumably follow the same path. That is how they contribute to dangerous filter bubbles. Make no mistake: social network sites are actively fighting fake news on various fronts, for example by restricting the activity of bots and adding friction to sharing certain news. But they would never abandon the core reinforcement logic that drives their news feed algorithms.
It looks like a classic “prisoner’s dilemma”: each social network site has an overwhelming incentive to use these algorithms because everybody else uses them, too.
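To see why this is a genuine prisoner’s dilemma and not just herd behavior, here is a tiny illustration with made-up payoff numbers for two hypothetical competing platforms. The numbers are pure assumptions; only the structure matters: keeping the engagement-maximizing algorithms is the best reply no matter what the rival does, even though both platforms (and certainly society) would arguably be better off if everyone dropped them.

```python
# Illustrative prisoner's dilemma: two platforms decide whether to keep
# or drop engagement-maximizing feed algorithms. Payoffs are invented
# for illustration; higher = better for that platform.
payoffs = {
    # (A's choice, B's choice): (A's payoff, B's payoff)
    ("keep", "keep"): (2, 2),
    ("keep", "drop"): (4, 0),   # A captures the attention that B gives up
    ("drop", "keep"): (0, 4),
    ("drop", "drop"): (3, 3),   # the cooperative outcome nobody reaches alone
}

for b_choice in ("keep", "drop"):
    best = max(("keep", "drop"), key=lambda a: payoffs[(a, b_choice)][0])
    print(f"If B plays {b_choice!r}, A's best reply is {best!r}")
# Prints 'keep' both times: a dominant strategy, hence the deadlock
# that only an outside rule can break.
```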
The only way out? You guessed it. Someone has to force all of them to change. In comes government regulation.
However, in the past few years, much of the public debate and regulatory action has focused on the “fake news” aspect. For example, this year France decided to establish a new agency to counter fake news coming from foreign sources (if you wonder what the 60 people initially assigned to this job can achieve, I am asking myself the same question, especially compared with Facebook’s 10,000+ staff fighting illegal content…). Ahead of the federal elections in Germany, Facebook is running an ad campaign on how it is fighting fake news.
Why the focus on fake news? Here is my little piece of conspiracy theory: social network sites steer the discussion toward fake news because that decoy is something they can actually address. Few people seem to get that fake news is just a symptom of the underlying machine learning algorithms. Fixing fake news will not fix filter bubbles.
This April, the European Commission issued draft legislation on artificial intelligence and suggested “a regulatory framework for high-risk AI systems only”. However, the artificial intelligence that governs our news feeds on social network sites did not make it onto the list of “prohibited” or “high-risk AI systems” (as outlined in Annex III), at least not yet. That needs to be fixed as soon as possible. Also, the suggested regulatory actions (“requirements for high-risk AI systems”) are very generic and focus on risk management procedures, leaving plenty of room for interpretation. If social network sites’ algorithms were added to the high-risk list, I would not be surprised to see this hashed out in the courts for decades before anything happens.
We don’t have that much time anymore. We need to be much more specific when we, the citizens, address this key threat to consensus and we need to do this now.
A Counter-Algorithm for Content Display
Imagine a world…
- where digital media still give you the exciting content that you (don’t know you) want to see – but at the same time, they expose you to insights that challenge your existing beliefs in a constructive, effective manner,
- where social media fosters the effective exchange of ideas and debate by incentivizing respectful language,
- where citizens still have diverging interests, perceptions, and opinions, but are enabled to explore solutions that serve most of us.
We want to explore a solution that uses the power of machine learning instead of trying to fight or destroy it.
For decades, science has been developing procedures that effectively build consensus and change minds (Janis and King, 1954; more recent and very relevant: Navajas et al., 2019, and the corresponding TED Talk). Why should it not be possible to automate this and integrate it into the digital world? One challenge is that these concepts mostly rely on interpersonal contact. However, experts hypothesize that limited tweaks to the algorithms may be sufficient to “limit the filter bubble effect without significantly affecting user engagement”.
Let’s summarize the scientific evidence on what we need to gain consensus:
Our starting idea is simple: to gain consensus, we need to learn to embrace the counter-arguments. But – and this is a fairly new and big “but” – research suggests that simply being exposed to counter-arguments in your news feed actually increases polarization instead of decreasing it (I routinely force myself to read articles in the Fox News app, and I am living proof of that effect). This probably happens because content is mainly addressed to in-group peers. Consequently, this content tends to be extreme and insulting toward dissenting opinions, because that drives engagement and group-think. However, it naturally also decreases the likelihood of convincing others. As we learned from Navajas et al. (2019), moderate opinions are much more likely to win over other people’s opinions.
Instead of simplistic rules and generic regulations like the one suggested by the European Commission, we suggest harnessing the predictive, self-optimizing intelligence of machine learning. This is what we think will work:
- The existing algorithms that govern the news feed stay untouched. This is necessary for any platform to remain engaging. Without these algorithms, any platform eventually becomes worthless because most content will be irrelevant for us. They fill our echo chamber with “filter bubble content”.
- Now we need to add “counter-content” that effectively challenges our current beliefs (which are already being reinforced by “filter bubble content”). How does this work? As described above, deep learning algorithms are trained to predict the engagement of any piece of content. The same algorithms can also predict whether or not a piece of “counter-content” decreases the likelihood of engaging with “filter bubble content”.
- The power of artificial intelligence will find persuasive tactics we may not even be aware of today. Think of it as two algorithms constantly hashing it out. These algorithms can become much more effective than any televised U.S. presidential debate. Why? Because the counter-algorithm will be trained not just to mobilize its own followers but also to convince the followers of the other side.
Interested in the details? Here is how AI veteran and expert Frank Buckler describes it:
- P denotes a person so that the algorithm can adapt to her interests.
- Let C be a set of information that describes a piece of content by using its text and visual information (“filter bubble content”).
- L(C, P) is the likelihood that person P will engage with content C; it is to be maximized. The mathematical function that calculates L from C and P is today shaped by social media’s deep learning algorithms. It is not necessary to understand how they work. It is important to accept that they can approximate any unknown function that predicts L from C and P, provided P has interacted often enough with different kinds of content C in the past. The more the person interacts, the better the prediction becomes.
What we now suggest is to include more information:
- Let CC be a set of information that describes a second piece of content that is exposed to the person simultaneously or in close succession (“counter-content”).
- L(C | CC, P) is now the likelihood that P engages with C given that CC is shown; it is to be minimized.
- The content C itself should minimize engagement with CC [= min L(CC | C, P)]. This makes sure that the counter-content CC contradicts content C rather than further exaggerating it. (A short code sketch of this selection rule follows below.)
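Here is what that selection rule could look like in code. It is a minimal sketch under the notation above: `engagement_model` is a hypothetical stand-in for the platform’s trained deep learning predictor (returning L(C, P), or the conditional likelihood when a second piece of content is passed as context), and the equal-weight sum of the two conditions is an arbitrary modeling choice made purely for illustration, not a claim about how it should be tuned in practice.

```python
# Sketch of the counter-algorithm's selection step, in the notation above.
# engagement_model(content, person, context=None) is a hypothetical
# stand-in for the platform's trained predictor: it returns L(C, P),
# or L(C | CC, P) when counter-content is passed as context.

def select_counter_content(content_c, person_p, candidate_pool, engagement_model):
    """Pick the counter-content CC that best satisfies both conditions:
    1. minimize L(C | CC, P)  -- CC weakens engagement with bubble content C
    2. minimize L(CC | C, P)  -- C does not amplify CC either, so CC
                                 contradicts rather than exaggerates C
    """
    def score(cc):
        # Lower is better on both conditions; summing them with equal
        # weight is an arbitrary choice for this illustration.
        return (engagement_model(content_c, person_p, context=cc)
                + engagement_model(cc, person_p, context=content_c))

    return min(candidate_pool, key=score)


# Dummy model so the sketch runs end to end (a real model would be
# the platform's own deep learning predictor).
if __name__ == "__main__":
    dummy_model = lambda content, person, context=None: 0.5
    pick = select_counter_content("bubble post", "reader", ["cc1", "cc2"], dummy_model)
    print(pick)
```

The existing feed still maximizes L(C, P); the new piece is only the second ranking pass that chooses which counter-content to show alongside it.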
Is there a better solution?
Let’s summarize other potential solutions under discussion:
- Outlaw filter algorithms: As described above, this would impair the usefulness of content platforms so severely that the same functionality would likely resurface illegally and/or indirectly. The same would happen if we outlawed filter algorithms only for politics or tried to ban political posts altogether.
- Introduce a mandatory “Driver’s License” (to use social network sites): While this may improve respectful language and help people to recognize fake news somewhat, it does not address the underlying problem: systematically misleading information and flawed learning through the selective presentation of information.
- Increase support for public broadcasting: Unless public broadcasters use similar algorithms, they will never stand a chance against digital media platforms that supercharge their user engagement with deep learning algorithms.
- Mandate generic risk management for deep learning algorithms (like the proposed EU directive): Since these algorithms are mission-critical for the platforms’ success, generic legislation that leaves plenty of room for interpretation will inevitably result in decades-long court battles. Introducing a mandatory code of conduct for platforms’ use of deep-learning algorithms is likely to have the exact same effect.
This Article is Useless
…unless you comment and share it.
My intention in writing this article is to explore how we can change the world for the better. I want to directly influence policy-making on digital media. However, no article alone can achieve this. Only if readers comment and share this article, only if it goes viral, will it ever have a chance to matter.
This is why I am asking you to comment and share your view.
This is why I am asking you to share this article as broadly as possible.
If you think this article is bogus, PLEASE COMMENT.
If you think more people should read this article, PLEASE SHARE.
If you agree with my conclusion that we need a smart solution like a counter-algorithm to save our world, PLEASE SHARE.
In any case, make up your own mind, but always remain curious.