Updated: Sep 1
By: Catherine Kane, Co-Director of Education Policy
We would all like to think that we are savvy consumers of online information, able to view everything on our social media feeds with objectivity and critical thought. But alas, psychology says otherwise. The events of 2020 have put us in an especially precarious position. Between the coronavirus pandemic, massive social upheaval, and the general election, misinformation has spread rapidly and will continue to do so. The effects of misinformation on social media are well documented; Russian interference in the 2016 election may very well have helped put our current president in the White House. When the stakes are this high, effective strategies to combat bad actors online are more important than ever.
The beauty and danger of social media lie in the algorithm: the majority of a user's feed is content tailored to their likes. These posts tap into existing biases and exploit them by offering simple solutions to large, complex world problems. This effect is called fluency: the simpler information is to process, the more believable it becomes. Emotional reaction to misinformation is also a major factor. In times of crisis and uncertainty, people become less skeptical. Content that tugs at the heartstrings or provokes outrage seems far more believable. A story that prompts a knee-jerk reaction is less likely to be investigated further, because it taps into the viewer's biases and the information just feels true. Algorithms are purposely designed to feed users content they are likely to agree with already. This traps users in bubbles that are incredibly hard to escape because of cognitive dissonance: the negative feeling associated with encountering information that contradicts one's current beliefs, which leads people to push that information away. By rejecting content that doesn't align with their biases, users make their information bubbles even stronger.
Two-thirds of Americans get news on social media. In a space that mixes pictures of friends with coverage of world events, the line between entertainment and information can blur. The problem is amplified by the mingling of opinion, theory, and straight news formats. Because most major news organizations post short summaries of their reporting on social media, users tend to read headlines rather than articles. But the 'social media recap' is not enough to stop inaccurate information from being spread by non-news actors posing as people with reliable information. A striking example is #dcblackout, which spread on Twitter during the Black Lives Matter protests in Washington, DC. Accounts claiming to be in Washington, DC reported, in a news-like fashion, that police had intentionally shut down all communication in and out of the city in hopes of concealing officer misconduct. Other accounts, also claiming to be in Washington, DC, tweeted that no such information shutdown was happening. But it was too late: #dcblackout began trending on Twitter and quickly spread to other platforms like Instagram and TikTok. Researchers concluded that most of the accounts tweeting about the debunked blackout were likely bots aiming to stir up trouble at a moment when tensions were high and Twitter users were susceptible to misinformation about the nationwide Black Lives Matter protests.
With all the misinformation swimming about online, tech companies will likely never be able to keep up with it all. That means users must take responsibility for fact-checking and sourcing the information they see. Luckily, researchers and reporters have laid out the best strategies for combating misinformation at a moment as important as this one. The first thing to recognize is that bad actors who deliberately put out untrue content try to make their posts as emotionally triggering as possible. False stories on Twitter travel six times as fast as real ones; inflammatory content drives engagement, and the ultimate goal for trolls, bots, and malicious users is visibility. To fact-check posts seen on social media, investigate three key facets: the origin, purpose, and reliability of the content. Try to figure out who runs the account and what agenda they are promoting. If an account presents itself as professional, look for contact information or the names of contributors. Look for an overarching message the account is pushing, and consider how accurate information might be manipulated to fit a bias. Lastly, check whether major news organizations are reporting the story or whether fact-checkers have looked into the claim. An abundance of websites and reporters are dedicated to fact-checking information online, and they have most likely already analyzed it.
If young people are not to grow up with a worldview skewed by misinformation on social media, it is on all users to promote accurate information and strategies for combating untrue claims. We all have a part to play in harnessing the power of social media to make the world a better place, rather than letting explosive misinformation divide us further with no recourse to tackle it.
Sources: Washington Post, NPR, Brookings Institution, First Draft News, Time