A decade ago, the internet served bloggers and activists as a site for unbiased news and protest, compensating for the freedom conventional media lacked. Social media was in its early stages, at least locally, with few able to afford a computer and the necessary dial-up connection. The ubiquity of smartphones and cheaper internet access has since changed how, and by whom, the internet is used. It is now common to joke about office workers spending their work hours on YouTube or Facebook.
The problem with social media in today's filter-bubble era is that it can easily become an echo chamber reinforcing existing beliefs. A measured, sober post, usually a plea for rationality and inclusivity, may make its brief rounds, but it is quickly buried by a torrent of text and images crafted to incite extreme responses. And extremity, positive or negative, is measured in engagement: clicks, likes, comments and shares. It does not matter how a post is received, as long as it is popular.
Social media has in some sense blurred boundaries that exist in the real world. The protests of 2011 and 2012 in North Africa and parts of the Middle East gave many proof of the power of social media to incite great social and political change. And at the time, that is how it felt. Of course, the movement in virtual spaces mirrored real-life grassroots activism and growing dissatisfaction among the populace. The ongoing protests in Hong Kong also continue largely because of the freedom the internet and social media allow: protesters notify each other online about the locations of riot police and facial recognition cameras throughout the city's streets.
A local example is the activism and support for the displaced people of Gedeo. Underreported in mainstream media, the violence in Gedeo was largely made public through collective online action. A call for donations of clothes, food and money attracted widespread attention and forced the government to acknowledge the displacement. The response was insufficient and superficial, but eyes were drawn to the disaster nonetheless.
However, social media is not only a force for good. The utopian ideal of technology, and especially of the internet as a great unifier, a tool for democracy through equal access and open communication across geographic, economic and racial divides, is quickly crumbling. Berhan Taye, senior analyst at Access Now and a researcher working at the intersection of tech and social justice, says there is a dangerous misconception about the power of technology.
“Tech is never neutral. It depends on how we use it. You can use it to organize genocide in Myanmar or to organize votes for a female candidate … For all the good it does we’re not good at conceptualizing the effect it has on marginalized people,” she explains.
Social media sites, especially Facebook, are profitable because of advertising revenue. The European Union and its member states have been demanding more transparency and accountability from Facebook and other tech giants, but countries in the Global South that account for little ad revenue hold no such sway over these companies.
For Facebook's nearly 3 billion users there are only about 40,000 content moderators, or "market specialists," worldwide; Facebook famously refuses to disclose the exact number. The company is also opaque about whom it hires, and since 2017 there have been repeated news reports of poorly trained or biased moderators in several countries. Ethiopia has nearly 6 million Facebook users, yet it is not known how many moderators cover the country or what languages they speak.
“I have been able to track down two Amharic speaking moderators but I haven’t been able to find out any more. I have tried contacting Facebook but I haven’t heard back,” says Befekadu Hailu, activist and PEN International Writer of Courage, 2019. Moderators who understand the several languages spoken in the country and the specific context of posts are necessary if the company is to ensure its Community Standards are upheld. However, these Community Standards are often unclear and inconsistent, and the methods of enforcing them are questionable.
Algorithms designed to identify hate speech and false information on the site typically learn from content moderators, who are prone to personal bias. Research conducted at the University of Washington found that content was 1.5 times more likely to be flagged as offensive when it was posted by an African American. A similar problem has been observed with activists calling out gender-based threats of violence online: their posts are removed, or they are temporarily banned from the site for failing to comply with Community Standards.
If Facebook and Twitter do not have the financial incentive to focus on countries like Ethiopia, does the burden fall on national governments? As governments around the world see the impact of social media in spreading disinformation and misinformation, many have attempted to stymie the deluge. With growing polarization on social media widely seen as a tool for spreading division and nationalism, the Ethiopian government has proposed a Hate Speech and False Information law carrying prison sentences of up to five years. The draft law has faced a great deal of criticism both locally and from the international community.
David Kaye, UN Special Rapporteur on freedom of expression, described the proposed law as “overbroad, untethered to human rights standards” and warned that it “could just as easily reinforce social divisions as counter online ills.”
“The law is extremely problematic,” says Berhan. “There’s no need to waste taxpayer money when there is no proposed implementation plan. The law doesn’t understand how technology works and it is not based on evidence.”
There has been scant research into the spread of false information on social media in the Ethiopian context. A few journalists verify claims made online and debunk posts, but no large-scale research has been carried out to measure how pervasive the problem is or to identify who is spreading disinformation. The Center for Advancement of Rights and Democracy (CARD) is an independent body currently mapping the Ethiopian political landscape on social media, trying to identify the sources of mis- and disinformation and hate speech. Befekadu, who heads CARD’s project, explains that trolling and Twitter bots spreading false information increase in times of crisis and in the run-up to the general election in May. CARD is also organizing workshops with university students on how to identify and report false information.
There are myriad issues in distinguishing actual news from false information or misinformation on social media. The anonymity online platforms provide can lead to more blatant expressions of hate. There is no way to visually distinguish credible content from questionable content: the platforms impose a uniform design of font and image juxtaposition, so it is impossible to tell actual news apart from opinion, even when it is posted by credible news sources. Flagged content may stay online for several hours before a content moderator decides it needs to be taken down, and those hours can mean thousands of shares and retweets. The minute one post is taken down, many more sprout up, amplifying conspiracy theories across the cyber sphere.
Without robust news media to reliably provide unbiased information on national affairs, people will continue to use social media as a news source. Social media has the ability to make distant events seem personal and immediate. One example is activist Jawar Mohammed’s November Facebook post, which drew dozens to his aid and thousands more to rally online on his behalf; news of the event caught the attention of millions, continued to dominate the news cycle and unsettled many regions of the country.
“The toxicity of social media is that we are blaming a lot of people for what few are doing. People must understand that taking extreme measures may lead to censorship. I don’t want to be censored. Until we figure out who is behind this we have to adopt regulating mechanisms,” says Befekadu. He adds that although people do not trust the media they must still try to protect themselves from false information. Individuals should be vigilant when going through their newsfeed, aware that there is malicious content designed to sow disorder and brutality.
Media literacy is another measure both Befekadu and Berhan are behind. When people are able to distinguish between facts and opinions, verified posts and doctored images, they will also begin to understand which sources are credible.
Transparency from the government can bring credibility back to both state-sponsored and independent media, leading to a shorter lifespan for conspiracy theories. Education and an open dialogue, on and offline, to air out miscommunications and conflicting accounts of history are necessary if we are to seize the opportunities of freedom social media offers. Befekadu cites the controversy surrounding the death of Asamenew Tsige and the allegedly foiled coup attempt. With the internet immediately shut down in Gondar, Bahir Dar and other areas, and with no mainstream media updates about the event, conspiracy theories spread. That misinformation continues to thrive today because of the lack of transparency from the government bodies in question.
But the onus should be on big tech companies to provide context-aware moderators to fairly oversee social networking sites, even if they do not have the financial incentive to do so.