Shaping Minds without a Third Party
In AD 1440, the invention of the printing press by the German goldsmith Johannes Gutenberg enabled the spread of ideas and thoughts worldwide. Mankind has come a long way since. The attempt to shape people’s minds has been a constant endeavour from the day the first printed newspaper was circulated. While the spread of ideas and opinions is essential for an informed populace, it is equally important that the medium remain accessible and easy to operate, giving people of all strata an equal opportunity to express themselves. With the advent of electronic mass media, the medium became undemocratic, as the technology was out of reach for most; only governments and big corporations could afford it. The development of persuasion as a science and discoveries in mass communication opened up a new era of influencing public opinion. The acceptance of democracy as a system ensured that this science of influence would be used, and misused, in many ways, with technology as an equal partner in the process. The recent growth of populism in many parts of the world stands testimony to this. We now recognize the dangers of private electronic mass media and are appalled at the noisy, rhetorical debates and biased anchors pushing their agenda. Social media, when it arrived, was hailed as an antidote to mass-media propaganda and as a democratic medium where every individual could express themselves. But we often miss the silent yet potent effect social media has through targeted persuasion. It is imperative that one be wary of both the Arnabs and the algorithms of social media.
Allegations against Facebook
Recently, Congress leader Shri Rahul Gandhi alleged that Facebook and WhatsApp have been siding with the ruling BJP on various occasions. He tweeted:
“BJP & RSS control Facebook & Whatsapp in India. They spread fake news and hate through it and use it to influence the electorate.”
The allegation was based on an article in the Wall Street Journal accusing Facebook of partisan conduct. The article revealed that former Facebook employees had confirmed they were explicitly directed to avoid applying the community-standards restrictions to posts favouring the BJP. The article pointed to the intervention of Ms. Ankhi Das, the Public Policy Director of Facebook India, who, according to the report, had the responsibility of overseeing “a team that decides what content is allowed on the platform”. A specific case, that of one T. Raja Singh of Telangana, was highlighted. The Wall Street Journal reported that “T. Raja Singh has said Rohingya Muslim immigrants should be shot, called Muslims traitors and threatened to raze mosques” and that “By March of this year, they (Facebook) concluded Mr. Singh not only had violated the company’s hate-speech rules but qualified as ‘dangerous’, a classification that takes into account a person’s off-platform activities, according to current and former Facebook employees.”
Ms. Das then intervened to stop the enforcement of the restrictions, arguing that punishing the BJP members “would damage the company’s business prospects in the country”. It should be remembered that India is Facebook’s largest market. The WSJ also reported that Das had shown the same favouritism towards the BJP during the elections. Facebook was quick to deny the allegation, but a simple reading of its track record and of its stand on issues involving the use of its data raises a flashing red light.
The Cambridge Analytica Fiasco
In 2018, Facebook was accused of allowing Cambridge Analytica, a British political consulting firm, to harvest the data of its users and their friends from the portal. The purpose was to build psychological profiles of the users, which would then be sold to campaigns and companies in the business of persuasion and campaign running. The CEO of Facebook brushed the episode aside, terming it a breach of trust and insisting that it had happened without Facebook’s involvement. But the harvesting of personal data from Facebook by Cambridge Analytica was first reported in 2015 by Harry Davies, a journalist for The Guardian, and many media outlets reported on it subsequently until action was finally taken in 2018. The whole modus operandi came to light with the use of the data in the Brexit campaign. The moot question is why Facebook did not act despite so many earlier reports. Why did Mark Zuckerberg, who was so very sorry for the leak at the congressional hearing, vehemently deny the allegations and call them a bluff earlier? The extracted data covered around 87 million Facebook users, of whom about 70 million were in the United States, and it was confirmed to have been used in the Ted Cruz, Donald Trump, and Brexit campaigns.
Measures after Cambridge Analytica
In the aftermath, Facebook claimed to have deleted hundreds of apps from its portal and also gave users a way to delete their history. In April 2018, Facebook opened up its Community Standards content rulebook to the public; for the first time, the world got to know the rules by which content was allowed or disallowed on Facebook. “Safety, Voice, and Equity” were claimed to be Facebook’s guiding principles. After the congressional hearing in the US, Mark Zuckerberg, the CEO of Facebook, repeatedly apologized and accepted responsibility for what is posted on Facebook.
He assured the US Congress that his company would increase the number of moderators and improve its AI systems for enforcing the guidelines. Yet a simple reading of the guidelines shows that they are full of confusing jargon and vague assertions.
Vague Community Standards Guidelines
The “Objectionable Content” chapter of Facebook’s latest Community Standards guidelines lists hate speech as a topic. Facebook claims to define hate speech as a direct attack on people based on certain protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. It defines an “attack” as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation. But this conclusive statement is not backed by any methodology for applying it; instead, it is padded with amateurish explanations. Facebook pushes all other questions to its official blog, ‘Hard Questions’, which is again filled with vague illustrations making it clear that every decision is left to the moderators. There is thus always scope to blame human error. On the preciseness of the system, the Facebook blog has the following to say:
“With billions of posts on our platform — and with the need for context in order to assess the meaning and intent of reported posts — there’s not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech.”
If Facebook were serious about keeping its portal safe and unbiased, why did it not take concrete steps to put a precise policy in place? The answer lies in Facebook’s stand on the use of its data for electoral purposes. It is now known that Facebook preceded Cambridge Analytica in this field and has, since 2015, been working to increase its own capacities with a focus on elections as a business opportunity. Since 2016, Facebook has been actively embedding its staff with political campaigns. One can only infer that, after blocking hundreds of apps from collecting user data from its platform, Facebook took matters into its own hands rather than lose a business opportunity.
It can safely be presumed that these vague guidelines were drafted with the intention of passing off aberrations as human error whenever they are pointed out. But what the WSJ article has brought out is not merely a moderator’s aberration; it is the involvement of a senior policy executive in explicitly preventing these guidelines from being applied to a particular party. This is shaping people’s minds without Cambridge Analytica. With 2.2 billion users worldwide and the constant engagement of those users, Facebook has become a behemoth of mind control. By drafting vague community guidelines, it has recreated the erstwhile license raj that governments once held. The company’s conduct after the allegations, merely deleting the questionable posts rather than removing the official and initiating a detailed inquiry, adds to the suspicion that strong policy directives were at work.
It is important to understand that in India we are not talking only about the electoral victory of a specific candidate; the discussion is about the normalization of ethnic hate and violence. The recent violence in Bangalore over a Facebook post is a case in point, and district authorities deal with many such social-media incidents that have the potential to spark violence. Facebook should be serious about acting fairly: it should draft concrete guidelines and enforce them without any favouritism. It should also rein in its more sinister product, WhatsApp, which outstrips Facebook many times over in spreading fake news and hate posts. Facebook cannot simply push the onus onto the governments of the day to make enforceable laws; it must itself act ethically and fairly to control posts that are contrary to the public interest. But all indications are that Facebook has learned a profitable lesson from the Cambridge Analytica episode. By drafting vague guidelines and by placing dubious people at the helm, it has set sail to shape the minds of people in a partisan way, without a third party.
By manipulating the sentiments of the people, Facebook influences not just emotions but also elections, deciding who should rule you.
The ramifications are huge for our country and its democracy. Every election henceforth in India will be remote-controlled by Facebook.
The writer, Sasikanth Senthil, is a former IAS officer of the 2009 batch who resigned in protest against the BJP government’s policies. In his resignation letter, he said that the “fundamental building blocks of diverse democracy are being compromised”.