In the book Nineteen Eighty-Four, humanity lives in a dystopia in which a figure known as ‘Big Brother’ watches everything people do, while a centralised platform pushes party agendas continuously through propaganda, spying, monitoring and thought control.
As we approach the 70th anniversary of the book, the threat we face today doesn’t come from some overt ultra-fascist government or political party.
It operates in stealth – an oligarchy of social media giants focused on the continued suppression and manipulation of public information, and the mass collection of personal information for ‘ministry’ databases across the world.
Now, as Facebook announces new plans to end the spread of ‘fake news’, new facial recognition partnerships and a continued attack on our most vulnerable, Ethan Nash examines this fast-approaching Orwellian future.
THE CONTROL OF INFORMATION
NEW: InfoWars has now been banned from social media including Facebook and YouTube; lawyers say the “hate speech” trial will “redefine free speech in the age of the internet”.
NEW: It has been revealed that private data firm Cambridge Analytica gained access to the information of over 50 million Facebook users. Read more on this developing story here.
Facebook’s advertising has become a focus of national attention as of late, prompting the organisation to introduce new privacy measures and advertisement standards to combat ‘the spreading of hate’ on the social networking website.
The site is planning to hire 10,000 new staff to work on “safety and security”, and to introduce new tools that make it easier to report content.
The new tools include the ability to report any ad or post as “false news” by tapping the three-dot button next to it; Facebook then uses third-party fact checkers to assess the post’s ‘veracity’ and reduces impressions by 80% for any party found to ‘mislead the public’.
Facebook has also started displaying “trust indicators”, a new feature that enables publishers to display information including their “ethics policy, corrections policy, fact-checking policy, ownership structure, and masthead.”
The controversy began after Facebook disclosed last year that it had “discovered $100,000 worth of ads placed during the 2016 presidential election season by ‘inauthentic’ accounts that appeared to be affiliated with Russia”.
Scrutiny of the company has since intensified after violent protests in Charlottesville led to ‘Jew hater’ advertisements by ‘right-wing groups’ being approved, prompting Facebook and other tech companies to vow to strengthen their monitoring of ‘hate speech’ on the platform.
Facebook CEO Mark Zuckerberg wrote at the time that “there is no place for hate in our community,” and pledged to keep a closer eye on posts and threats of violence on Facebook.
“It’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious,” he wrote.
In 2018, 75% of all news traffic comes from social media, meaning Facebook has a large stake in what information its 2 billion users are presented with. As existing algorithms and complex code continue to filter this vital flow of information, many are asking:
Is this simply yet another case of the Hegelian Dialectic at work?
To examine this, we took a quick look at Facebook’s automated ad system to see if “Jew hater” was really an ad category, and it was. The strange thing, though, is that the category — with only 2,274 people in it — was too small for Facebook to allow us to buy an ad pegged only to ‘Jew haters’.
Instead, Facebook’s automated system suggested ‘Second Amendment’ as an additional category that would boost our audience size to 119,000 people, presumably because its system had correlated gun enthusiasts with anti-Semites.
Furthermore, the FBI has already stated that none of the investigations so far has found any conclusive or direct link between Mr. Trump and the Russian government.
Could this be the ‘Problem, Reaction, Solution’ scenario Facebook needs to justify a systematic attack on the free press and independent media?
BIG BROTHER IS WATCHING YOU
In addition to the suppression of public information, Facebook is now also facing privacy concerns over its access to and storage of personal information – even information that isn’t displayed publicly.
Facebook has recently confirmed that it has acquired Confirm.io, in a move that raises concerns for the future of accessibility to the social media website.
The startup offered an API that let other companies quickly verify if someone’s government-issued identification card, like a driver’s license, was authentic.
This has led many civil rights advocates to speculate that Facebook is looking to incorporate biometric identification into its login process, after the company was caught testing a feature that let users unlock their accounts using a selfie.
Facebook has also announced plans to use facial recognition technology to notify users if someone else uploads a photo of them as their profile picture, which the company said may help reduce impersonations and lead to greater security on the site.
Through this, the company is working closely with Australia’s eSafety Office; Australia is one of four countries participating in a limited global pilot with Facebook that aims to prevent intimate images of Australians from being posted and shared across Facebook, Messenger, Facebook Groups and Instagram.
“We’ve been participating in the Global Working Group to identify new solutions to keep people safe, and we’re proud to partner with Facebook on this important initiative as it aims to empower Australians to stop image-based abuse in its tracks,” said Julie Inman Grant, eSafety Commissioner.
Facial recognition technology has been a part of Facebook since at least 2010, when the social network began offering suggestions for whom to tag in a photo.
The move is the next step in a long list of tech companies putting in place a variety of functions using facial recognition technology, despite fears about how the facial data could be used. In September, Apple revealed that users of its new iPhone X would be able to unlock the device using their face.
In 2017, Facebook was fined €110 million ($168 million) by the European Commission for “misleading” users about how data is shared between Facebook and WhatsApp.
Facebook’s most recent transparency report, covering 2015, also reveals it received 30,000 requests from police for data and had an 81% compliance rate, up 2.3% from the same period in 2014.
Facebook’s privacy principles, which are separate from the user terms and conditions agreed to when someone opens an account, range from giving users control of their privacy, to building privacy features into Facebook products from the outset, to users owning the information they share.
Many privacy groups have condemned the move towards facial recognition identification systems, citing previous privacy breaches as a concern for the new partnership.
ATTACK ON CHILDREN
Child development experts and advocates are urging Facebook to pull the plug on its new messaging app aimed at kids, after Facebook launched the free Messenger Kids app in December, pitching it as a “way for children to chat with family members and parent-approved friends”.
The app works as an extension of a parent’s account, and parents get controls such as the ability to decide who their kids can chat with.
A group letter sent to CEO Mark Zuckerberg argues that younger children — the app is intended for those under 13 — aren’t ready to have social media accounts, navigate the complexities of online relationships or protect their own privacy.
In light of the app launch, a variety of experts and technology insiders began questioning the effects smartphones and social media apps are having on people’s health and mental well-being — whether for kids, teens or adults.
Sean Parker, Facebook’s first president, said late last year that the social media platform exploits “vulnerability in human psychology” to addict users. A chorus of other early employees and investors piled on with similar criticisms.
In fact, a study of nearly 200 adolescents in Korea showed that those who were very high users of smartphones had significantly more problematic behaviours, including somatic symptoms, attentional deficits, and aggression, than did those who were low users.
In addition, the investigators note that the effects of smartphone overuse were similar to those of Internet overuse. Internet gaming disorder has been included in Section 3 of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the section of the manual reserved for conditions considered worthy of further research.
“Messenger Kids is not responding to a need – it is creating one,” the letter states.
“It appeals primarily to children who otherwise would not have their own social media accounts and is targeting younger children with a new product.”
Led by the Boston-based Campaign for a Commercial-Free Childhood, the group includes over 100 psychiatrists, pediatricians, educators and the children’s singer Raffi Cavoukian.
What are they doing to the next generation?
Stay tuned for follow up pieces on Facebook’s involvement in funding VR and AI technologies, and more.
For more TOTT News, SUBSCRIBE to the website on the right-hand panel for FREE and follow us on social media for more exclusive content: