Written by Annelise Dakin and Camryn Giles
“With great power comes great responsibility.” Chances are, you have heard this iconic quote before. Initially, it may bring to mind the moral dilemmas faced by superheroes, political figures and those in positions of authority. But here’s one scenario you may not have thought to apply it to yet: social media.
Australian Prime Minister Scott Morrison has threatened jail time for the executives of social media companies such as Facebook and Twitter. The proposed penalties follow the recent Christchurch terror attack, which was broadcast to mass audiences online through Facebook’s livestream feature.
The “legislative crackdown” will see Australian executives face imprisonment if their company’s moderators fail to effectively remove “abhorrent violent material” shared on their platforms. It is clear that the Government considers itself responsible for preventing the weaponisation of digital platforms.
However, two questions remain: who should bear the blame, and are the proposed laws appropriate?
Let’s begin with the laws currently in place. The Australian Competition and Consumer Commission (ACCC) has the power to take legal action against business misuse of social media. However, social media remains a relatively ‘ungoverned’ space, with minimal legislation currently in place to discourage the sharing of violent material online.
Australian and international laws should work to support the viability of all social media organisations in this rapidly changing media landscape. Yet Australia’s current media laws are severely outdated, with some legislation still referring to faxing and other obsolete mediums.
This is especially concerning when we consider the fact that social media isn’t a ‘new’ phenomenon; it’s been used by Australians for close to two decades. Establishing laws for the social media landscape is not only necessary but well overdue.
Social media provides citizens with a platform; one which affords them unprecedented accessibility to public audiences. It is clear that Facebook and other social media companies have continually struggled to meet moderation demands; however, the Australian government has been reluctant to enforce regulation until now.
It is important that the legislation introduced works toward counteracting the source of the issue: public behaviour online. With the majority of social media companies being US-based, Australian executives are left with limited control over content moderation.
There is a common misconception that social media companies are entirely negligent and disinterested in taking responsibility for the role they play. However, this is not the case. In a recent article written for the Washington Post, Facebook founder and CEO Mark Zuckerberg publicly agreed that governance over social media is a global issue and that, to ensure the safety of users, social media needs a “globally harmonised framework”. For this to occur, governments need to take on a more active role in updating the rules and regulations for online behaviour.
There is no doubt that government intervention is required to prevent digital platforms from being weaponised. But here’s the thing: we are targeting the wrong people. The proposed legislation assumes that by threatening local executives with imprisonment, content moderation will improve tenfold and we will no longer see this material appearing in our news feeds.
This expectation is completely unrealistic. It fails to consider the real perpetrators: social media users. The reformed legislation should target the Australian public by establishing legal guidelines on what it means to be a ‘good’ online citizen. When social media moderators finally removed the original video of the Christchurch terror attack, the violent footage continued to circulate online.
Facebook alone had to remove 1.5 million copies of the video within the first 24 hours of the attack. That’s 1.5 million users who actively shared the livestream. That’s 1.5 million users who need to change their online behaviour and be held accountable. They represent the source of the problem, a problem that will certainly not be resolved by the imprisonment of social media executives.
While social media companies should cooperate with governments by developing better operational systems where possible, we will see very little change if we do not begin to hold individual users legally accountable.
But of course, it isn’t all that simple. While holding Australians legally accountable for their online behaviour may appear to be the most appropriate solution, it will be a difficult one to implement. Governing bodies and victims of ‘cyber-attacks’ face the challenge of online anonymity, which has been seen to promote anti-social, deceptive and illegal behaviour online.
Currently, social media companies require little to no verification of personal identity, allowing users to establish a social media account under an alias. In recent years, public pressure has grown for social media companies to require identity verification from new users, with some suggesting that online anonymity should be banned altogether. It’s a possible solution, but a controversial one. Stripping users of the ability to conceal their identity will help to reduce online trolls and hold citizens legally accountable, but at what cost?
Australian society is becoming increasingly aware of the dangers associated with entrusting tech companies with private data, and recent Facebook security breaches have brought into question the company’s ability to keep personal data secure. If social media companies cannot prove themselves capable of securing user data, entrusting them with legal identification may threaten the physical and financial safety of users.
Regardless, it cannot be ignored that without an effort to reduce anonymity, the current measures taken by social media companies to prevent physical harm or risk to public safety will remain ineffective. Facebook currently disables the accounts of offending users found to be sharing content that breaches community standards.
This will be effective for approximately five minutes, during which the offending user will switch their browser, create a new Gmail account, and sign up to the same social media platform under a new alias. In short, you can take the user off Facebook, but you can’t take Facebook off the user. Requiring new users to verify their identity would prevent prohibited users from re-accessing the site.
Considering that Facebook moderators deleted 583 million fake accounts in early 2018 alone, this measure would significantly ease pressure on Facebook’s moderation resources.
The seeds for identity verification are already being sown, with Facebook ruling that advertisers must verify their identities prior to purchasing political ads on the platform. Of course, it is unlikely that the broader Australian public will welcome propositions requiring them to provide legal identification in order to access such platforms.
Accustomed to the endless parameters of the internet, Australians may feel as though such measures are unreasonable. But here’s the thing: accessing social media isn’t a right; it’s a privilege, and it should be treated as such. Just as a citizen would be held legally accountable for their conduct in a physical public space, so too should they be held accountable in virtual public spaces.
The fact that social media remains an ungoverned space is unacceptable. However, threatening the imprisonment of Australian social media executives will do little to stop Australian citizens from sharing online content that threatens public safety. Only by holding each and every user legally accountable for their online behaviour will we see real change.