
Sunday, 1 April 2018

DEMOCRACY À LA GOOGLE, FACEBOOK & YOUTUBE: SOME REFLECTIONS, by Lucas Malaspina

IN TWO LANGUAGES (English, Spanish)

When Mark Zuckerberg decided to offer emerging nations Internet.org, anger was not long in coming. As Daniel Leisegang writes in "Facebook está salvando al mundo" ("Facebook is saving the world"), this project, launched in 2013, was a humanitarian masquerade: its declared aim was to bring Internet access to the huge number of Third World citizens who still remain outside the global village.
The idea was to break the barriers that prevent, for example, two-thirds of the Indian population from joining Facebook. Beyond India, however, the project targeted some 100 other nations.
Accused of violating net neutrality, Facebook had to rename the project: Internet.org became Free Basics, and in 2015 it had to leave India in the face of the barrage of criticism it received.
Why? Because Facebook was not offering the Internet plain and simple, but a mobile application through which the lower-income sectors of that country could access a limited version of the Internet.
The idea, originally driven by the spirit that "connectivity is a human right", ended up showing that what Zuckerberg was really proposing was to lay hands on the gigantic mass of data produced by a significant share of the world's poor, in order to monetise it.
Who decided what services would be available in the application? According to Chris Daniels, vice-president of the company, the decision was taken by Facebook, the government of each country and the associated telecommunications operator.
We would be justified in saying that if "the Internet is a human right", then with Free Basics Facebook aims only to regulate the "limited human rights" of the half of the world's population that does not have access to the Internet.
Policies that actually widen the digital divide have little to envy the North Korean model, where most people have access only to a modest local intranet of just 28 pages, with content controlled by the government of Kim Jong-un (the ruling elite, obviously, being the exception).
Free Basics, still in a very embryonic phase, counted about 40 million users in November 2016. In Latin America it has already been implemented in three countries (over twenty have joined worldwide): Colombia, Guatemala and Bolivia. Bolivia's inclusion in the programme highlights how little continental populism has discussed the problem of information monopolies in the digital era (or, in this case, how far it has collaborated with, or subordinated itself to, such monopolies).
Free Basics does not allow access to Google, the most popular search engine in the world, but only to Bing (the search engine of its competitor Microsoft, which owns shares in Facebook).
Now, what about the 49.6% of humanity (3.7 billion people) who do have access to the Internet tout court, without (apparent) restrictions, more than 90% of whom are Google users? Can we really boast of using a truly free and "neutral" Internet?

Search Engine Manipulation Effect

The expression "Search Engine Manipulation Effect" (SEME) was coined in August 2015 by Robert Epstein and Ronald E. Robertson, two American academics who demonstrated that the vote of 20% or more of undecided voters could be swayed by the way Google orders its results.
In several articles and interviews, Epstein refers to his study and states that "in some demographic groups, up to 80% of voters" may change their electoral preferences according to the results offered by Google. In February 2016, a controversy broke out in the British media over the search engine's interference in elections.
This is not just a problem of Western democracy. According to the French intellectual Barbara Cassin, author of Google Me: One-Click Democracy, Google is said to have given the Chinese government profiles of its users in that country, "which allowed the identification and even the arrest of dissidents".
To illustrate the ideological bias of search engines, Cassin notes that "if, in a country other than China, one types Tiananmen into Google, you will obtain data on the repression of demonstrators in that Beijing square in 1989, which left hundreds dead; but if you type it in China, you will only get peaceful urban references to the square".
Of course, Google does not admit the ideological bias implicit in its system, but the company's recent policies to help "fight terrorism" in general, and the Islamic State (ISIS) in particular, show concretely how its power over people's decisions works today.
Take Jigsaw, a Google pilot programme built on its customised advertising system, whose objective is not commercial at all but decidedly political. The plan is to locate users who are receptive to the message of ISIS and serve them a series of targeted ads through which they are secretly redirected to content that refutes the theses of ISIS and could help dissuade them from the idea of joining the 'Caliphate'.
Few could object to Google trying to convince people to reject ISIS, but the scheme makes clear that Google is far from "neutral" or "objective" and, on the contrary, draws attention to the possibilities for manipulating the user.

Battle against fake news or censorship 2.0?

Times have changed, and with them what we find on the Internet. In 2010, only 40% of the results of a Google search on politics came from the media; by 2016, that share was close to 70%.
On April 25, 2017, Google announced that it had implemented changes in its search service to make it harder for users to access what it called “low quality” information such as “conspiracy theories” and “fake news”. Facebook also applied a similar policy.
Google said that the central purpose of the change in its search algorithm was to give it greater control over the identification of content considered objectionable. Speaking on behalf of the company, Ben Gomes stated that Google had "improved evaluation methods and made algorithmic updates" in order to "make more authoritative content emerge".
Google continued: "we have updated our guidelines for evaluating search quality to provide more detailed examples of low-quality web pages, so that evaluators can flag them properly". These evaluators are instructed to flag "upsetting experiences for the user", including pages that present "conspiracy theories".
According to Google, these changes apply unless “the query clearly indicates that the user is seeking an alternative point of view”.
Since Google implemented the changes in its search engine, fewer people have accessed left-wing, progressive or anti-war news sites. Based on the information available in an Alexa analysis, some of the sites that have experienced losses in ranking include WikiLeaks, Truthout, Alternet, Counterpunch, Global Research, Consortium News, WSWS, American Civil Liberties Union and even Amnesty International.
Interestingly, shortly before that Google decision, The Washington Post had published an article titled “Russian propaganda effort helped spread ‘fake news’ during election, experts say”. That article cited an anonymous group known as PropOrNot that had compiled a list of fake news sites spreading “Russian propaganda”. On April 7, 2017, Bloomberg News reported that Google was working directly with The Washington Post to “verify” articles and eliminate fake news.
This was followed by Google's new search methodology: of the 17 sites branded "fake news" on the blacklist circulated by The Washington Post, 14 dropped in their world ranking. The average decrease in the global reach of these sites was 25%, and some saw it fall by as much as 60%.
Linking these facts strengthens the suspicion that Google has allied itself with the powerful traditional media to discriminate against alternative and independent outlets.
In addition to its own search engine, Google controls YouTube, a company it bought in 2006 (one year after its founding). Once videos reach a certain number of views, YouTube pays their producers for the advertisements (ads) it places on them, acting as an intermediary between the big advertisers and the producers.
The most serious change in YouTube came in response to reports such as that of The Wall Street Journal, which found that ads were appearing on YouTube videos containing extremism and hate. When big advertisers such as AT&T and Johnson & Johnson withdrew their ads, YouTube announced that it would try to make the site more acceptable to advertisers by "taking a tougher stance on hateful, offensive and derogatory content".
With these new algorithms, Google harmed producers of progressive and independent videos, provoking what they called the adpocalypse ("apocalypse of the ads"). In essence, the mechanism ended up penalising such alternative content and pushed video producers to avoid opinions or points of view deemed objectionable … by the policy standards of Google/YouTube.
Google's practices with the algorithms that govern its search engine have had not only political implications but also commercial purposes. Under its antitrust rules, the European Commission fined Google 2.7 billion dollars for manipulating those algorithms, by exploiting its dominant position, to direct users to its own shopping service, Google Shopping.

The obscurity of algorithms: an elementary democratic problem

Cathy O’Neil, data scientist and author of the book Weapons of Math Destruction, warns against the "blind trust" placed in algorithms to obtain objective results.
The architecture of the Internet has a tremendous influence on what is done and what is seen: algorithms determine which content spreads most on Facebook and which appears at the top of Google searches. Yet users are neither aware of this nor able to understand how data are collected and classified.
While Free Basics was criticised for giving the disconnected of the Third World a second-class connection in which the Internet amounts to Facebook, it cannot be denied that for "first-class" digital citizens Google is virtually synonymous with the Internet, since it is what makes it possible to access its contents in an organised way.
In this way, the obscurity of algorithms becomes an elementary democratic problem. After a decade of populist or progressive governments in Latin America, for example, no measures have been taken to control the power of these information monopolies, and debate on the subject is long overdue.
Not even the left in developed nations has come up with a joint programme. One of the most urgent tasks facing us today is to politicise this issue.

[translation from Spanish by Phil Harris (for IDN-InDepthNews)]

In propagating and/or republishing this text you are kindly requested to quote the source: www.utopiarossa.blogspot.com