Read the headline, tap, scroll, tap, tap, scroll.
It is a typical day and you are browsing your usual news site. The New Yorker, BuzzFeed, The New York Times, BBC, The Globe and Mail, take your pick. As you skim through articles, you share the best ones with like-minded friends and followers. Perhaps you add a comment.
Few of us sit down and decide to inform ourselves on a particular topic. For the most part, we pick up our smartphones or open a new tab, scroll through a favored site and click on whatever looks interesting. Or we look at Facebook or Twitter feeds to see what people are sharing. Chances are high that we are not doing this intending to become educated on a certain topic. No, we are probably waiting in line, reading on the bus or at the gym, procrastinating, or grappling with insomnia, looking for some form of entertainment.
We all do this skimming and sharing and clicking, and it seems so innocent. But many of us are uninformed about or uninterested in the forces affecting what we see online and how content affects us in return — and that ignorance has consequences.
The term “filter bubble” refers to the results of the algorithms that dictate what we encounter online. According to Eli Pariser, those algorithms create “a unique universe of information for each of us … which fundamentally alters the way we encounter ideas and information.”
Many sites offer personalized content selections, based on our browsing history, age, gender, location, and other data. The result is a flood of articles and posts that support our current opinions and perspectives to ensure that we enjoy what we see. Even when a site is not offering specifically targeted content, we all tend to follow people whose views align with ours. When those people share a piece of content, we can be sure it will be something we are also interested in.
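To make the mechanism concrete, here is a minimal sketch of how a feed might be ranked against a user profile. It is not any site's actual algorithm; the topic tags, function names, and sample data are illustrative assumptions.

```python
# A minimal sketch of feed personalization: score each article by how well its
# topics overlap with what a (hypothetical) user has already clicked on, then
# show the highest-scoring items first. Real platforms use far richer signals.

from collections import Counter

def build_profile(click_history):
    """Count how often each topic appears in the user's past clicks."""
    return Counter(topic for article in click_history for topic in article["topics"])

def score(article, profile):
    """Sum the user's affinity for each of the article's topics."""
    return sum(profile[topic] for topic in article["topics"])

def personalize(feed, click_history, limit=5):
    profile = build_profile(click_history)
    return sorted(feed, key=lambda a: score(a, profile), reverse=True)[:limit]

history = [{"topics": ["politics", "economy"]}, {"topics": ["politics"]}]
feed = [
    {"title": "Tax bill passes", "topics": ["politics", "economy"]},
    {"title": "New telescope images", "topics": ["science"]},
]
print([a["title"] for a in personalize(feed, history)])
# Articles matching past clicks float to the top; everything else sinks.
```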
That might not sound so bad, but filter bubbles create echo chambers. We assume that everyone thinks like us, and we forget that other perspectives exist.
Filter bubbles transcend web surfing. In important ways, your social circle is a filter bubble; so is your neighborhood. If you live in a gated community, for example, you might start to think that everyone drives a BMW, a Tesla, or a Mercedes. Your work circle acts as a filter bubble too, depending on whom you know and at what level you operate.
One of the great problems with filters is our human tendency to think that what we see is all there is, without realizing that what we see is being filtered.
The concept of filter bubbles was first identified by Eli Pariser, chief executive of Upworthy and an activist and author. In his book The Filter Bubble, Pariser explained how Google searches bring up vastly different results depending on the user's history. He cites an example in which two people searched for “BP” (British Petroleum): one saw news related to investing in the company, while the other received information about a recent oil spill.
Pariser describes how the internet tends to give us what we want:
“Your computer monitor is a kind of one-way mirror, reflecting your own interests while algorithmic observers watch what you click.”
Pariser terms this reflection a filter bubble, a “personal ecosystem of information.” It insulates us from any sort of cognitive dissonance by limiting what we see. At the same time, virtually everything we do online is being monitored — for someone else’s benefit.
Each time we click, watch, share, or comment, search engines and social platforms harvest information. Much of this information is used to generate targeted advertisements. Most of us have experienced the odd sensation of déjà vu when a product we looked at online suddenly appears everywhere we go on the web, as well as in our email inboxes. Often the advertising continues until we succumb and purchase the product.
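A rough idea of what that harvesting can look like, sketched under assumptions: the event fields, user IDs, and the view-but-not-purchase retargeting rule below are invented for illustration, not any platform's real pipeline.

```python
# A hedged sketch of the kind of event logging described above: every click,
# view, or share is recorded against a user ID so that ads can be retargeted
# later. The event fields and the retargeting rule are assumptions.

import time
from collections import defaultdict

event_log = defaultdict(list)  # user_id -> list of recorded events

def track(user_id, action, item):
    """Record one interaction (click, view, share, purchase) with a timestamp."""
    event_log[user_id].append({"action": action, "item": item, "ts": time.time()})

def retargeting_candidates(user_id):
    """Products the user viewed but never bought: prime targets for repeat ads."""
    viewed = {e["item"] for e in event_log[user_id] if e["action"] == "view"}
    bought = {e["item"] for e in event_log[user_id] if e["action"] == "purchase"}
    return viewed - bought

track("u42", "view", "running shoes")
track("u42", "view", "blender")
track("u42", "purchase", "blender")
print(retargeting_candidates("u42"))  # {'running shoes'} keeps following you around
```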
Targeted advertisements can help us find what we need with ease, but they come at a cost:
Personalization is based on a bargain. In exchange for the service of filtering, you hand large companies an enormous amount of data about your daily life — much of which you might not trust your friends with.
The internet has changed a great deal from the early days, when people worried about strangers finding out who they were. Anonymity was once king. Now, our privacy has been sacrificed for the sake of advertising revenue:
What was once an anonymous medium where anyone could be anyone—where, in the words of the famous New Yorker cartoon, nobody knows you’re a dog—is now a tool for soliciting and analyzing our personal data. According to one Wall Street Journal study, the top fifty Internet sites, from CNN to Yahoo to MSN, install an average of 64 data-laden cookies and personal tracking beacons each. Search for a word like “depression” on Dictionary.com, and the site installs up to 223 tracking cookies and beacons on your computer so that other Web sites can target you with antidepressants. Share an article about cooking on ABC News, and you may be chased around the Web by ads for Teflon-coated pots. Open—even for an instant—a page listing signs that your spouse may be cheating and prepare to be haunted with DNA paternity-test ads. The new Internet doesn’t just know you’re a dog; it knows your breed and wants to sell you a bowl of premium kibble.
The sources of this information can be unexpected. Companies gather it from places we might not even consider:
When you read books on your Kindle, the data about which phrases you highlight, which pages you turn, and whether you read straight through or skip around are all fed back into Amazon’s servers and can be used to indicate what books you might like next. When you log in after a day reading Kindle e-books at the beach, Amazon can subtly customize its site to appeal to what you’ve read: If you’ve spent a lot of time with the latest James Patterson, but only glanced at that new diet guide, you might see more commercial thrillers and fewer health books.
One fact is certain: the personalization process is neither crude nor random. It operates along defined guidelines that are refined every day, both across whole user populations and for each individual:
Most personalized filters are based on a three-step model. First, you figure out who people are and what they like. Then, you provide them with content and services that best fit them. Finally, you tune to get the fit just right. Your identity shapes your media. There’s just one flaw in this logic: Media also shape identity. And as a result, these services may end up creating a good fit between you and your media by changing … you.
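The three-step loop lends itself to a toy simulation. The sketch below assumes invented topic weights and a simple update rule, purely to show why a filter that tunes itself to your clicks ends up changing the profile it is tuning to.

```python
# A toy rendering of the three-step model quoted above (identify, serve, tune),
# showing why "media also shape identity": each round, the profile is updated
# toward whatever the filter served, so the filter narrows over time.
# All weights and the update rule are illustrative assumptions.

def serve(profile, inventory):
    """Step 2: pick the item whose topic the user currently likes most."""
    return max(inventory, key=lambda item: profile.get(item["topic"], 0))

def tune(profile, item, rate=0.2):
    """Step 3: nudge the profile toward the topic that was just served and clicked."""
    for topic in profile:
        target = 1.0 if topic == item["topic"] else 0.0
        profile[topic] += rate * (target - profile[topic])

profile = {"politics": 0.55, "science": 0.45}        # Step 1: who you (seem to) be
inventory = [{"topic": "politics"}, {"topic": "science"}]

for round_ in range(5):
    item = serve(profile, inventory)
    tune(profile, item)
    print(round_, {k: round(v, 2) for k, v in profile.items()})
# A slight initial lean toward politics hardens round after round.
```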
Nicholas Carr also covers the collection of online information in The Shallows. He notes that the more time we spend online, the richer the information we provide:
The faster we surf across the surface of the Web—the more links we click and pages we view—the more opportunities Google gains to collect information about us and to feed us advertisements. Its advertising system, moreover, is explicitly designed to figure out which messages are most likely to grab our attention and then to place those messages in our field of view. Every click we make on the Web marks a break in our concentration, a bottom-up disruption of our attention—and it’s in Google’s economic interest to make sure we click as often as possible.
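Carr's point about placing the messages most likely to grab attention can be illustrated with a bare-bones ad-selection sketch. The click probabilities and bids below are made-up numbers, not how any real ad system prices placements.

```python
# A bare-bones illustration of the placement logic described above: given some
# per-user predicted click probabilities (invented numbers), show whichever ad
# is expected to earn the most, i.e. the one most likely to grab attention
# weighted by what the advertiser pays per click.

ads = [
    {"name": "running shoes", "p_click": 0.08, "bid_per_click": 0.40},
    {"name": "mortgage refi", "p_click": 0.02, "bid_per_click": 1.50},
    {"name": "mobile game",   "p_click": 0.12, "bid_per_click": 0.10},
]

def expected_revenue(ad):
    """Chance this user clicks, times what a click pays."""
    return ad["p_click"] * ad["bid_per_click"]

best = max(ads, key=expected_revenue)
print(best["name"], round(expected_revenue(best), 3))
# 'running shoes' wins (0.08 * 0.40 = 0.032): predicted attention decides what you see.
```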
Every single person who has ever spent time on the web knows how addictive the flow of stimulating information can be. No matter how disciplined we otherwise are, we cannot resist clicking related articles or scrolling through newsfeeds. There is a reason for this, as Pariser writes:
Personalized filters play to the most compulsive parts of you, creating “compulsive media” to get you to click things more.
In an attention economy, filter bubbles assist search engines, websites, and platforms in their goal to command the maximum possible share of our online time.
The implications of filter bubbles for democracy, in particular, have led to widespread concern. Pariser writes:
Democracy requires citizens to see things from one another’s point of view, but instead we’re more and more enclosed in our own bubbles. Democracy requires a reliance on shared facts; instead we’re being offered parallel but separate universes.
… Personalization filters serve a kind of invisible autopropaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.
Pariser quotes Jon Chait as saying:
Partisans are more likely to consume news sources that confirm their ideological beliefs. People with more education are more likely to follow political news. Therefore, people with more education can actually become mis-educated.
Democracy requires an informed citizenry, yet filter bubbles are distorting our ideas of the world. In a paper published in the Proceedings of the National Academy of Sciences, researchers Robert Epstein and Ronald E. Robertson revealed the extent of this influence on our voting:
The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.
Filter bubbles do not occur only on the internet. Epstein and Robertson note that older forms of media have long swayed voters:
It is already well established that biased media sources such as newspapers, political polls, and television sway voters.
However, they believe the internet has a more dramatic effect than other forms of media:
Search rankings are controlled in most countries today by a single company. If, with or without intervention by company employees, the algorithm that ranked election-related information favored one candidate over another, competing candidates would have no way of compensating for the bias.
Biased search rankings would, in effect, be an entirely new type of social influence, and it would be occurring on an unprecedented scale. Massive experiments conducted recently by social media giant Facebook have already introduced other unprecedented types of influence made possible by the Internet.
Inside these bubbles, everyone seemed to agree with each other. Except they didn’t, and no one noticed until it was too late.
A further issue with filter bubbles is that we are enrolled in them by default; the best we can do is opt out. As of March 2017, an estimated 1.94 billion people had a Facebook account, of which 1.28 billion logged on every day. It is safe to assume that only a small percentage of them are aware of the algorithms shaping what they see. Considering that around 40% of people regard Facebook as their main news source, this is worrying. As with cognitive biases, a lack of awareness amplifies the impact of filter bubbles.
We have minimal concrete evidence of exactly what information search engines and social platforms collect. Even SEO (search engine optimization) experts do not know for certain how search rankings are organized. We also don’t know if sites collect information from users who do not have accounts.
Scandals are becoming increasingly common as sites and services are found to be harvesting details without meaningful consent. Evernote came under fire when it emerged that staff members could access users’ notes, and Unroll.me drew criticism for selling details of its users’ email habits. Even when these practices are listed in user agreements or disclaimers, the confusing jargon makes it difficult for users to ascertain how their data are being used, by whom, and why.
In his farewell address, Barack Obama aired his concerns:
[We] retreat into our own bubbles, … especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions. … And increasingly, we become so secure in our bubbles that we start accepting only information, whether it’s true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there.
Filter bubbles can cause cognitive biases and shortcuts to manifest, amplifying their negative impact on our ability to think in a logical and critical manner. A combination of social proof, availability bias, confirmation bias, and bias from disliking/liking is prevalent. As Pariser writes:
The filter bubble tends to dramatically amplify confirmation bias—in a way, it’s designed to. Consuming information that conforms to our ideas of the world is easy and pleasurable; consuming information that challenges us to think in new ways or question our assumptions is frustrating and difficult. This is why partisans of one political stripe tend not to consume the media of another. As a result, an information environment built on click signals will favor content that supports our existing notions about the world over content that challenges them.
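One way to see this amplification is a small simulation. Assume, purely hypothetically, that confirming items get clicked three times as often as challenging ones, and let a toy ranker allocate the feed according to the click share it observes; the probabilities below are invented numbers.

```python
# A small simulation of the feedback Pariser describes: if users click items
# that confirm their views more often, a ranker driven only by click counts
# drifts toward showing confirming content almost exclusively.

import random

random.seed(1)
P_CLICK = {"confirming": 0.30, "challenging": 0.10}   # assumed click behavior
clicks = {"confirming": 1, "challenging": 1}           # flat starting prior
impressions = {"confirming": 0, "challenging": 0}

def pick_kind():
    """Show content in proportion to its observed click share -- the ranker's only signal."""
    total = sum(clicks.values())
    return "confirming" if random.random() < clicks["confirming"] / total else "challenging"

for _ in range(5000):
    kind = pick_kind()
    impressions[kind] += 1
    if random.random() < P_CLICK[kind]:
        clicks[kind] += 1

shown = impressions["confirming"] / sum(impressions.values())
print(f"share of feed that confirms existing views: {shown:.0%}")
# A modest preference in clicks compounds into a heavily confirming feed.
```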
Pariser sums up the result of extensive filtration: “A world constructed from the familiar is the world in which there’s nothing to learn.”
We have an inherent desire to be around those who are like us and reinforce our worldview. Our online behavior is no different. People form tribes based on interests, location, employment, affiliation, and other details. These groups — subreddits, Tumblr fandoms, Facebook groups, Google+ circles, etc. — have their own rules, conventions, in-jokes, and even vocabulary. Within groups (even if members never meet in person), beliefs intensify. Anyone who disagrees may be ousted from the community. Sociologists call this behavior “communal reinforcement” and stress that the ideas perpetuated may bear no relation to reality or empirical evidence.
Communal reinforcement can be positive. Groups geared towards people with mental health problems, chronic illnesses, addictions, and other issues are often supportive and assist many people who might not have another outlet.
However, when a group is encased within a filter bubble, it can lead to groupthink. This is a psychological phenomenon wherein groups of people experience a temporary loss of the ability to think in a rational, moral, and realistic manner. When the members of a group are all exposed to the same confirmatory information, the results can be extreme. Symptoms include being excessively optimistic, taking risks, ignoring legal and social conventions, regarding those outside the group as enemies, censoring opposing ideas, and pressuring members to conform.
Shane Parrish