With the end of net neutrality in the United States, the introduction of facial recognition, and a formal notice from the French National Data Protection Commission (CNIL), social networks are making the news — particularly Facebook, which is at the center of a scandal unlike any other. For the first time, former Facebook employees are testifying to the harmful effects of the product they helped develop. So what is wrong with Facebook and other social networks?
Respect for privacy
Oh, the eternal problem of user privacy! Facebook is a huge company, and is now being accused by the CNIL of sharing user information with its subsidiaries.
In 2016, WhatsApp changed its conditions of use. France was among the first countries to raise its voice on the matter, along with the United Kingdom and Germany, which have also prohibited data sharing for the purpose of targeted advertising. Since then, WhatsApp has attempted to justify its information sharing, promising that data is used only for security purposes and to improve its services.
However, the CNIL accuses WhatsApp of not complying with the French Data Protection Act of 1978, and of failing to cooperate in bringing itself into compliance. The CNIL is thus of the opinion that user “consent” obtained through the application for the purpose of data sharing has no legal basis or value.
Several social networks have been implicated in breaches of user privacy in France and elsewhere, including Tinder, Twitter, and Snapchat. The debate has become more heated since Facebook recently announced plans to use facial recognition, which, according to the company, would improve security for its users.
An apparatus of influence
Social and political networks
In the 2016 U.S. presidential campaign, Facebook was accused of serving as a platform for political influence.
Today, most users of social networks receive their “news” through small ads on their walls, through what their friends share, or on Snapchat stories. What is criticized, particularly at Facebook but also on social networks in general, is that the algorithms used to filter ads do not differentiate between “real news” and “fake news”. These algorithms do not always filter out publications with defamatory content, since they favor content that elicits strong reactions, such as likes or shares.
Donald Trump turned up the controversy during his presidential race, accusing the American television media of broadcasting “fake news” about him and his campaign. Though we are entitled to have our reservations about this statement, fake news should be taken seriously.
Social networks abound with fake news. It is often the kind of information Internet users react to readily, and it therefore occupies a prominent position on their walls.
In addition, advertising content can be broadcast by anyone. Yes, literally anyone can create a fake account and participate in the distribution of questionable content. That is what Russia was accused of doing in the last U.S. presidential election. Ten months after the election, Facebook revealed that 470 accounts had been created to disseminate content favoring Trump.
So don’t Internet users have common sense? Well, according to former Facebook employee Chamath Palihapitiya, the answer might not be so cut and dried.
Palihapitiya’s testimony surfaced early in December 2017 and quickly spread around the globe.
In a video, he expresses feeling guilty for having participated in the development of “tools that tear apart the social fabric”. His statement draws on recent studies claiming that reactions to publications (likes, shares, retweets, etc.) play an important role in “dopamine-based reaction loops” — dopamine being a neurotransmitter central to the brain’s reward system and implicated in addiction. Thus, according to him, social networks “destroy society”, since users have become dependent on the content they post and react to.
In addition, Palihapitiya accuses social network founders Mark Zuckerberg (Facebook) and Kevin Systrom (Instagram) of knowing what they were doing, exploiting a loophole in human psychology to sell their products. Facebook countered that, on the contrary, their company was primarily used to strengthen human social ties.
According to Upworthy CEO Eli Pariser, social networks tend to trap Internet users in “filter bubbles”. That is to say, algorithms are constructed to display content that generates the greatest reaction in our social circle. For example, Facebook’s algorithm generates publications on our walls as a function of what is most popular among a certain circle of friends. As a result, users do not have access to opinions other than their own, since people tend to associate with others who share their ideas.
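The engagement-driven ranking Pariser describes can be illustrated with a minimal, hypothetical sketch: if posts are ordered by reactions coming only from a user’s own circle of friends, content that is popular outside that circle never rises to the top. The function names and data below are purely illustrative assumptions, not Facebook’s actual algorithm.

```python
# Hypothetical sketch of engagement-based ranking: posts that draw the most
# reactions within the user's own friend circle are surfaced first, so
# dissenting content popular elsewhere is rarely shown (a "filter bubble").

def rank_feed(posts, friends):
    """Order posts by total reactions from the user's friends only."""
    def circle_engagement(post):
        return sum(
            count
            for user, count in post["reactions"].items()
            if user in friends  # reactions from outside the circle are ignored
        )
    return sorted(posts, key=circle_engagement, reverse=True)

posts = [
    {"id": "opposing-view", "reactions": {"stranger1": 50, "stranger2": 40}},
    {"id": "popular-in-circle", "reactions": {"alice": 3, "bob": 2}},
]
feed = rank_feed(posts, friends={"alice", "bob"})
print([p["id"] for p in feed])  # → ['popular-in-circle', 'opposing-view']
```

Even though the opposing-view post has far more reactions overall, the in-circle post ranks first — which is exactly the self-reinforcing dynamic the filter-bubble critique points to.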
Notably, Facebook has been accused of participating in the dissemination of radical ideas and of favoring one U.S. presidential candidate over another.
What is the solution?
Facebook and other social networks have been forced to respond to these accusations and to at least state that the situation is a delicate one.
As for Facebook, the company has announced that it has increased its number of moderators. The amount of defamatory content and fake news should, therefore, logically decrease. Publications depicting murder or suicide, which caused a scandal in 2017, are also expected to become less common.
Lastly, Facebook is a social network and not a classic media outlet. It is therefore difficult for the company to “choose” its editorial content, since it is not managers who choose but users, who decide through what they click on. Facebook’s algorithms then favor publications similar to those that have been clicked on previously.
So the decision is up to us, the users. How should we behave on the Internet then? Palihapitiya says, “Now it’s up to you to decide what you want to give up, how much of your intellectual independence you’re willing to give up.”