Legislation to tackle new-age Thoughtcriminals online.
NEW LAWS ON THE WAY
The federal government plans to introduce new laws to “help reduce the spread of harmful content on social media”, it has been announced this morning.
A press release by Communications Minister Paul Fletcher details the plan to introduce legislation that will give Australia’s media watchdog more power over tech companies who fail to meet the standards of a regulated disinformation code of practice.
The new laws, which are expected to be introduced to parliament later this year, will make it easier for the government to decide what type of practices will be needed to tackle the issue of ‘misinformation’.
It seems unlikely this push will change with a government switch.
Under the code, misinformation will be defined as false or misleading information that is likely to cause harm, while disinformation is false or misleading information that is distributed by users via spam and bots.
“Digital platforms must take responsibility for what is on their sites and take action when harmful or misleading content appears,” Mr Fletcher said.
“This is our government’s clear expectation — and just as we have backed that expectation with action in recently passing the new Online Safety Act, we are taking action when it comes to disinformation and misinformation.”
Paul Fletcher was a Director at Optus before assuming the role of Communications Minister.
The move comes as the world’s most powerful tech companies try to combat what they say is a ‘deluge of misinformation and disinformation’ online — first about the coronavirus pandemic, and now the war in Ukraine.
AUSSIES ‘EXPOSED TO MISINFORMATION’
This new move hasn’t happened overnight.
A 2021 report by the Australian Communications and Media Authority (ACMA) found “82 per cent of Australians had experienced misinformation about COVID-19 in the past 18 months”.
Following this report, DIGI, the tech sector’s lobby group, introduced a voluntary code of practice on disinformation and misinformation.
The voluntary code was established at the request of the federal government following the release of an inquiry into the market power of digital platforms.
DIGI members Facebook, Google, Twitter, Microsoft and viral video site TikTok have signed up to the code, which requires them to tell users what measures they have in place to stop the spread of misinformation on their services and provide annual ‘transparency’ reports detailing their efforts.
This is the reason you see so many flags and notices on social media.
DIGI attempted to strengthen the voluntary code in October by forming an independent board to police its guidelines and handle complaints that are deemed a “material breach”.
Now, the government hopes to introduce these practices in Parliament by the end of 2022.
The government says despite the code, websites such as Facebook, YouTube, TikTok and Twitter have been ‘filled with harmful content about the coronavirus pandemic and more recently the Russian invasion of Ukraine’.
So they have to step in and fix this once and for all.
The Australian government, or the Ministry of Truth?
Paul Fletcher issued a warning to the social media platforms earlier this month, urging them to immediately remove Russian state media content over concerns they were ‘facilitating the spread of disinformation and promoting violence over the invasion of Ukraine’.
An entire country has had its media purged, and in turn, Russia has purged access to the outside world. This is how quickly information can be shut down in 2022, and that is the concerning element.
Look at how quickly access to information on both sides has disappeared.
Regardless of what is happening, or who is ‘good’ or ‘bad’, information should be available to all citizens so they can form their own informed perspective. It is not for bureaucrats to decide what is ‘true’ or ‘false’.
Could this legislation potentially lead to a similar situation happening in Australia?
Under the new laws, the press release states that ACMA will be given information-gathering powers allowing it to legally request that tech platforms such as Meta (formerly Facebook), Google and Twitter hand over information about content and the users who post it.
ACMA says this will help them ‘obtain data on complaints handling, issues they are acting on and engagement with harmful content’.
‘Harmful’ no longer means graphic or violent; it has become a broad, psychology-based definition that allows authorities to become the controllers of ‘what is true’ in textbook Orwellian style.
ACMA says a Misinformation and Disinformation Action Group – made up of stakeholders across government and the private sector – will also be established.
Nerida O’Loughlin, ACMA chair, said there was more to be done to ensure ‘disinformation and misinformation’ did not spread online.
“In coming months the ACMA will focus on testing whether the self-regulatory arrangements put in place by the industry are effective or whether further actions are needed,” Ms O’Loughlin said.
This is despite the fact the government has passed anti-encryption legislation, announced an overhaul of surveillance legislation, can force ISPs to block websites, has data retention powers and more.
Not to mention the voluntary groups already in place, their ‘fact-checking’ armies, and a Department of Home Affairs taskforce established to investigate these very things.
Is this not enough already?
Of course not.
Thoughtcriminals will not be tolerated in the Brave New World Order.
Not only that, they will be stamped out.
You will not have a digital presence. Your opinions will have led to you becoming ‘unpersoned’.
A good thing your social credit score will stop you from committing more heinous thoughtcrimes.
2 + 2 = 5, citizen.
For more TOTT News, follow us for exclusive content:
Facebook — Facebook.com/TOTTNews
YouTube — YouTube.com/TOTTNews
Instagram — Instagram.com/TOTTNews
Twitter — Twitter.com/EthanTOTT