The rise of social media – particularly as many people's primary source of news – has enabled the mass spread of misinformation, with significant consequences for society.
Edith Cowan University contributed to the Understanding Mass Influence report for the Australian Government's Department of Defence, which examined global examples of online platforms being manipulated to the point of swaying election results.
Countries and the organisations they appointed were able to harvest users' information online and then use that data to target those users and sway their behaviours, attitudes and perceptions.
We saw this with Cambridge Analytica and Russia's Internet Research Agency, which used sophisticated disinformation to influence the 2016 US presidential election.
Identifying this misinformation is not easy, as it is often presented as news or fact. When it is spread deliberately, it can be dangerous.
We are seeing this in content about COVID-19 vaccines, which undermines public confidence in the vaccines and fuels the anti-vaccination movement.
This negatively impacts society, divides the community, disrupts the economy and ultimately hurts public health.
Unfortunately, platforms have little incentive to act on misinformation: it is compelling content that boosts user engagement and time spent on the platform, which in turn increases profits.
But, while we are all at risk of our feeds being filled with fake news, we can take steps to stop ourselves from falling down a rabbit hole of misinformation. Some things we can do in this regard:
1. Assess the source of the (mis)information
Some questions worth asking:
- Is the source qualified to share this information?
- Has this source been right or wrong about topics they've posted in the past?
- Is there a risk of bias? What is the source's actual intent, what do they stand to gain from sharing the information, and what is their agenda?
It is also worth remembering that, to be a credible source of information, an individual or organisation should have credentials related to the field or topic being discussed.
For example, information about COVID-19 and vaccinations should come from an infectious diseases researcher or a healthcare organisation specialising in the field, not a podcast host.
2. Validate the information
Ascertain whether the information is based on real data, recent research or science, or whether it is just a one-off case, anecdotal story or unfounded opinion.
You should also ask yourself whether it is based on published, peer-reviewed research, whether it is generating fear, anxiety or distrust, and whether it is pushing you towards an ideological position.
3. Review multiple sources
Don't trust just one source, especially one that seems controversial, and especially when it comes to issues that affect your life choices or purchase decisions.
The main way to check accuracy is to review multiple sources for the same information.
Good sources for validating information received on social media include government, industry and academic expert reports and papers, as well as their official digital channels such as social media profile pages and websites.
4. Activate your "digital tribe"
Encourage members of your online community – or digital tribe – to identify misinformation and point it out to others in their extended networks.
Such advocacy will further add to the pressure for social media and governments to do something about the issue of misinformation.
It has happened before: the recently introduced Social Media Anti-Trolling Bill 2021 resulted from public pressure on the Australian government to act on social media users who spread defamatory material about others.
It's a small step in the right direction – hopefully there are more steps to follow.
5. Be wary of confirmation bias
Social media algorithms create echo chambers and bubbles of reality.
Seeing more of the same content does not mean it's true; it just means the platform's algorithm has identified you as a potential target audience for that particular content, based on your preferences and past behaviour on the platform.
The algorithm will serve you more of that content, presenting you with only one point of view. Seeing more of the same is then likely to influence your attitudes and behaviour.
Notably, the points above are not exhaustive, but they are a good starting point for assessing whether the content you are exposed to online is in fact trustworthy, or whether it has been shared to advance someone's ideological or political agenda.
Dr Violetta Wilk is a Lecturer and Researcher in Digital Marketing at Edith Cowan University's School of Business and Law.