

RightsCon 2020: Our favorite sessions

In 2020, RightsCon made history — it was the first time the conference was hosted fully online. While we were shattered that we couldn’t go and see everyone in person in San José, it only makes us more excited to land in Costa Rica for RightsCon in 2021!

Although all-online conferences do throw some spanners into the works, one thing that many RightsCon goers noticed this year was the enormous breadth of attendees and panelists who were able to participate digitally, including many who wouldn’t ordinarily have been able to travel for the conference.

This was true for us as well! Lots of people from the Loki team were able to tune in this year, and we spent the whole week chatting to each other about the fresh, informative, and revelatory RightsCon sessions we were attending. 

With it being the middle of winter here in Melbourne, sometimes this meant sitting up in the middle of the cold, dark night with a rug draped over your shoulders to catch a fireside chat with Sir Tim Berners-Lee, but it was worth it!

Here are some of the personal highlights from the members of the Loki team who attended RightsCon this year.

Internet shutdowns have become a commonplace strategy employed by governments that want to limit or squash freedom of expression. The panel Internet shutdowns in the Arab region: a challenge to defeat discussed the ways governments are using internet shutdowns to stifle freedom of expression and the ability of activists to organise protests effectively. Hayder Hamzoz, founder of the Iraqi Network for Social Media (INSM), described the situation in Iraq in October 2019, when the internet was shut down for an extended period of time during anti-government protests which led to over 700 deaths and more than 20,000 young people injured.

This panel was also insightful in terms of better understanding the internet landscape in the wider MENA region. Many MENA governments perceive social media as a threat because social platforms can enable people to express their views to wide audiences. Consequently, shutdowns have been used as a tool to attempt to control the public.

Many countries in the region experienced regular short-term internet shutdowns during elections, as well as shutdowns during national school exams in attempts to prevent cheating.  In effect, internet shutdowns have become normalised. However, the shutdown during the protests in Iraq was of unprecedented length.

Hamzoz went on to discuss how information flows were maintained in ‘low tech’ ways during the Iraq shutdown. A local radio station was established inside the protest area, and loudspeakers were used to provide important information that would normally have been communicated through social media and messaging platforms. Some journalists made a five-hour road trip north to the autonomous Kurdistan region, where the internet was still accessible. The internet was also available through international roaming services, and while such services are expensive, it was still possible to connect with the outside world using a roaming SIM.

A second panel of interest was Censoring without getting caught: challenges around measuring and advocating against bandwidth throttling. When governments throttle internet connectivity, they can easily blame connectivity issues on other network disruptions, such as technical failures or high levels of internet usage. Lai Yi Ohlsen, Project Director at M-Lab, an organisation that monitors network traffic and shutdowns, described how M-Lab has needed to adapt its methodology to identify intentional throttling efforts by governments: comparing against historical data, and considering the political and social contexts of locations experiencing throttling.
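To make the idea concrete, here is a minimal sketch of the historical-comparison approach described above: flag a window as suspicious when measured speeds stay far below the historical baseline for a sustained run. The function name, thresholds, and data are illustrative assumptions, not M-Lab's actual methodology.

```python
def median(values):
    """Median of a list of numbers (robust baseline for noisy throughput data)."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def flag_throttling(historical_mbps, recent_mbps, drop_ratio=0.3, min_run=3):
    """Return True if recent throughput stays below drop_ratio * baseline
    for at least min_run consecutive measurements.

    A brief dip (ordinary congestion) resets the counter; only a sustained
    suppression of speeds is flagged as possible throttling.
    """
    baseline = median(historical_mbps)
    run = 0
    for speed in recent_mbps:
        if speed < drop_ratio * baseline:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 0
    return False

history = [48.0, 52.1, 50.3, 47.8, 51.5, 49.9]  # hypothetical baseline (Mbps)
print(flag_throttling(history, [12.0, 11.5, 10.8, 11.2]))  # sustained drop -> True
print(flag_throttling(history, [12.0, 49.0, 50.5, 51.0]))  # brief dip -> False
```

In practice this is only one signal; as the panel stressed, it has to be combined with the political and social context of the location before anyone can claim a drop was intentional.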

Mishi Choudhary, legal director of the Software Freedom Law Center, India (SFLC.in), described the situation in Kashmir, where a full shutdown lasted seven months until a Supreme Court ruling forced the government to re-establish connectivity. The government responded by limiting connectivity to 2G speeds. From a legal standpoint, connectivity was restored, but accessing and using most websites or services over such a limited network is essentially impossible.

With throttling and similar strategies growing more common around the world, advocates who fight for a free and open internet may struggle to prove that shutdowns are intentional. The need for collaboration between activists and advocacy groups has never been greater — we must work together to hold governments to account when they try to use shutdowns to control their people.

End-to-end encryption. I must have explained it a million times: in conference keynotes, at the pub, to my grandma. So when I saw Will Cathcart, the head of WhatsApp, speaking about encryption at RightsCon, it was an instant RSVP.

Although Will gave a brief E2EE explainer, this chat was really about the various threats against encryption. These days, people love to act like encryption is a villain. WhatsApp, as the most widely used encrypted messenger in the world, is smack-bang in the middle of this discussion — and Session, our encrypted messenger, is a part of it too. 
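For readers who haven't sat through one of those million explainers, the core idea behind E2EE can be shown with a toy Diffie-Hellman exchange. This is deliberately tiny and insecure, and not how WhatsApp or Session actually derive keys (they use X25519 and far more machinery); the point is just that the two endpoints derive the same key independently, while a relay server that only sees the public values cannot.

```python
import hashlib

# Toy Diffie-Hellman parameters: a generator and a prime modulus.
# (2**127 - 1 is a Mersenne prime; real protocols use vetted curves.)
g, p = 5, 2**127 - 1

alice_secret = 123456789   # never leaves Alice's device
bob_secret = 987654321     # never leaves Bob's device

# These public values are all the relay server ever sees.
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each side combines its own secret with the other's public value.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)
assert alice_shared == bob_shared  # identical key, derived independently

# Hash the shared secret down to a 32-byte symmetric key for encrypting messages.
key = hashlib.sha256(str(alice_shared).encode()).digest()
print(len(key))  # 32
```

Because the server only ever handles `alice_public` and `bob_public`, it has no way to recover the shared key; that asymmetry is what "end-to-end" means in practice.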

As Will explained, this argument about encryption isn’t actually new. It seems new, but really it’s an age-old debate about privacy. The question is: do you have the right to have a private conversation? Most people would say yes; nobody would think twice about you and your friend stepping into a private room to have a private conversation.

In fact, it is commonplace for the most sensitive conversations to be conducted in this way. Doctors talking with their patients, lawyers consulting with clients, heart-to-hearts with your partner. Nobody would feel comfortable having a camera and microphone in their doctor’s surgery, lawyer’s office, or bedroom feeding information back to their government. Technology is now ubiquitous enough that this is entirely feasible, but it would be ardently rejected, and we should reject attacks on encryption just as fervently.

Will also spoke about how governments around the world are attempting to regulate encryption. He said many places have skirted around the issue: instead of outright banning encryption, they’re opting to ramp up the collection of other kinds of information which can be used to trace the origin of a message. This is, of course, a problem that Session tries to solve. He also said the efforts of countries like Australia, the UK, and the United States to ban or limit the use of encryption are emboldening smaller world powers to be more aggressive in trying to shut down encryption.

This is a serious problem, because globalisation means anti-encryption regulation has a ripple effect on the whole world. If a democratic country with a benevolent government weakens encryption, it opens its own citizens up to surveillance by other states. Not to mention it might inadvertently limit access to secure technology for people in those other states who really need it.

Will’s chat really hit home because all the work we’ve done on Session is part of this same fight for encryption. It’s not really about technology; it’s about rights. So far, we haven’t done a great job transitioning the rights we hold dear in the physical world into the digital world, and encryption is going to be a pivotal part of that transition.

How can social media platforms work to protect journalists and activists around the world from hate speech and online harassment in a way that is contextually specific, technologically scalable, and also protects freedom of speech and expression? 

In Technology-Facilitated Hate Speech and Digital Activism, Kashaf Rehman (Bolo Bhi), Shmyla Khan (Digital Rights Foundation), Arzu Geybullayeva (Azerbaijan Internet Watch), and Alex Warofka (Facebook) discussed the very real threats faced by those who challenge existing power structures using social media and online forums, and the challenges platforms face in adequately responding.

Social media has created space for activists and social movements to mobilise and organise themselves online, though it has equally created a space for detractors to undermine these efforts. Platforms like Facebook say their goal is for people to be able to exercise their freedom of expression rights, though they acknowledge that people can’t be free to express themselves if they don’t feel safe online. 

Although social media platforms work to create community guidelines and definitions around hate speech in order to protect users, these guidelines are often static, don’t take cultural context into account, and are frequently enforced too slowly, leaving targets in vulnerable situations. In Facebook’s case, its definition of “hate speech” is applied equally to users all around the globe, which it says is necessary because some jurisdictions define “hate speech” in ways that Facebook feels exclude, or even target, minority groups.

But aside from that, is it even possible to define a set of standards that apply globally without infringing on freedom of expression? There is a range of offensive and distasteful speech that still counts as protected expression under Article 19 of the International Covenant on Civil and Political Rights. So how and where should we draw the line? The Rabat Test seems to be the closest we’ve come, though applying it at scale is a different problem, especially for platforms with hundreds of millions of users.

With free expression and open discourse around sensitive and controversial issues necessary for democracy, some governments have realised the effectiveness of online harassment in silencing and discrediting dissenting voices, automating the process through bots and fake accounts.

So how do we move forward? One suggestion is for social media platforms to work with civil society and activists to provide their accounts with additional security measures against hacking and impersonation attempts. Another was to look beyond “hate speech” guidelines, where there are frequent disconnects between protections on paper and what happens in reality. 

The biggest takeaway was that hate speech and harassment guidelines need to be in constant development, though the big question for me is how social media platforms can make these specific and relevant to cultural context when users interact with each other across borders.

2020 is a year of many major elections — including in my home countries of New Zealand and the United States. Seeing as I’ll be filling out two ballots this year, it seemed like as good a time as any to take a look at the panel Election year: assessing the global human rights impacts of political misinformation in 2020, hosted by Nnenna Nwakanma from the World Wide Web Foundation. 

The spotlight has been shining brightly on political misinformation since the ‘16 US elections. Since then, misinformation has plagued elections all around the world and become one of the foremost threats against modern democracies. It has become clear that any society, no matter its institutional, constitutional, or judicial protections, can be destabilised by political disinformation. 

The part of this discussion that I found most interesting was around micro-targeting. In that infamous ‘16 election, the Trump campaign served 5.9 million ad variations in just six months. That’s million with an M. The purpose of these ad variants is clear — many of them have contradictory or even outright misleading content, and different ads are specifically shown to different users. 

In the panel, the main example being used was Facebook. In many instances, these ad variations are targeting users based on information they don’t even know Facebook has about them. This is extremely granular, specific information — like whether or not they gamble, whether or not they have high school educations, if they are employed, and many other behavioural categories. This level of micro-targeting can lead to two different voters having extremely divergent opinions about who candidates are and what they stand for.

The panel’s message to Facebook was this: Your responsibility must live up to your influence. 

It’s clear Facebook has the power to sway entire elections, and if you’re going to allow this incredible amount of personal data to be used to influence people, you have to accept the responsibility that comes with it.

Of course, here at the Loki Foundation, our mission is to take that data out of the equation entirely, to cut off the head of the snake. If Facebook can’t be trusted to act in the best interest of society, taking measures to protect private data, whether through regulation or behaviour, can help to mitigate the damage being done by big blue data.

Sir Tim Berners-Lee is a certified hero of the internet: he invented the World Wide Web. That’s right — the very thing you’re using to read this article right now. Tim was joined by Berhan Taye, a Senior Policy Analyst and the Global Internet Shutdowns Lead at Access Now, for the session Fireside Chat: Sir Tim Berners-Lee, inventor of the World Wide Web.

Of course, RightsCon was full of incredible chats — but how could I pass this one up? 

Recently, we ticked over 50 per cent of the world being internet-connected. That is a huge achievement, but it introduces a problem which Sir Tim Berners-Lee calls the digital divide: as internet connectivity becomes more commonplace, those without it become more and more disadvantaged. Companies and governments are providing services that are online-only, and lots of information can’t reasonably be accessed without an internet connection. While more people having internet access is generally thought of as a good thing, it means we need to be more and more conscious of the digital divide.

Currently, it’s things like device costs and infrastructure development that limit internet access. But even if 95 per cent of the world were internet-connected, the last 5 per cent might be facing hurdles like malicious governments or low literacy levels: problems which most people would say the internet can help solve.

For the most part, governments have been rushing to hook up the internet in their countries — internet access drives commerce, education, and many other factors which can be used to measure the success of the government itself. But as Berhan Taye pointed out, while governments do spend billions providing the internet to their citizens, they will just as quickly turn it off for political gain. As Taye so succinctly put it, internet shutdowns don’t happen on any ordinary Tuesday. They come when you’re protesting, fighting for your rights or for your life — it’s during critical national events that the internet goes dark. To hide dissent. To hide human rights violations. 

The reality is that whether you’re connected to the internet lies mostly in the hands of governments and ISPs. Sir Tim Berners-Lee spoke about some ways to combat this dependence, such as mesh networking, and this is exactly why the work of the Loki Foundation is so important. Making online services more decentralised makes them more resilient: slowdowns and shutdowns won’t be as effective, and the people calling for help will be heard loud and clear all around the globe.


RightsCon Online 2020 was a fantastic experience, and the Loki team thoroughly enjoyed engaging with the fascinating depth and breadth of knowledge on offer across the panels, labs, and workshops we attended. There is a vast array of challenges and digital rights dangers to combat over the course of the decade to come, but RightsCon proves that if we, as activists, digital rights defenders, developers, and others in the community, can come together and share ideas, we will be ready to build a brighter, freer future in 2020 and beyond.