On 29 July a House of Lords committee published a report calling for social networks to start
demanding people register using their real names. This would be
designed to tackle forms of online crime, including “revenge porn”.
The report did not call for any new laws, but rather guidance,
saying that existing legislation should be enough to tackle these
offences.
Lilian Edwards, professor of internet law at the University of
Strathclyde and advisor to the Open Rights Group and the Foundation
for Internet Privacy Research, was called to provide the committee
with expert advice. Here, she talks to Wired.co.uk about why she
thinks a real name social media policy will never work (and could
lead to a police state), why police have struggled with online
harassment to date, and how we can overcome these types of
criminality together, as a society.
Do you think it’s right the onus is on private companies
to check identities?
Unfortunately this is the biggest issue where the Lords and I
disagreed! Personally I do think private companies should take some
responsibility to maintain standards of civilised interaction on
their own sites – I think sites should have responsibilities to
look at and take down obviously abusive images or text on request
— they cannot hide behind saying that it is up to users to help
themselves. But no, I
don’t think a real names mandatory policy should be applied by
law. I think people have a right to speak online about, for
example, their politics, their sexuality, their health, their love
life, without constantly worrying that at some future point a
government/employers/divorce courts/insurers may get access to
their true identity.
How should we balance these rights with police investigations
and crime prevention measures?
We already have in the UK ways the police can ask networks for the
identities of subscribers via RIPA warrants, and I think that’s a
decent balance (though it would be better still if those
requests had to be judicially authorised). But mandating social
networks to demand real names to better stop or solve crimes would
be pointless, because those who wanted to commit antisocial acts
would simply lie — there is no equivalent of an online digital
passport in either the UK or the US, and I think that’s right.
Because setting up such an ID card that might then have to be used
when paying, when shopping, when associating online, is a license
for a police state.
Given so many internet companies are based in the US,
how would such a regulation in the UK be practical?
The US would simply never impose such a requirement on adults
given the protection of free speech, especially political speech.
So if the UK passed such laws it would be putting itself into the
very embarrassing position of trying to impose unenforceable laws
on US based sites — especially Twitter, which has remained firm on
a pro-anonymity policy. Even Facebook, which claims to have a real
name policy, admitted in its IPO accounts that around three million
of its accounts were fake or pseudonymous. South Korea tried
to mandate real names policies, and it ended in a humiliating
climb-down, with the Supreme Court there finding them unconstitutional,
and in an enormous security breach, which was made much worse by the
real names policy.
The US certainly has a history of resisting enforcement of
foreign laws, which it sees as breaching fundamental US guarantees
of free speech — see the France v
Yahoo! case and the recent SPEECH (Securing the Protection of
our Enduring and Established Constitutional Heritage) Act rejecting
enforcement of foreign libel judgments. The worst-case
scenario is that the UK tries to block foreign websites that don’t
comply with the real name policy — that would be very unproductive.
The Lords report suggests that parents and teachers make
children aware of what is acceptable online behaviour. Is this
enough?
Parental controls for children — which the report is not
concerned with — are something of a bandaid. If we want to change
a culture of abusive speech online we need to persuade the
next generation this is simply as unacceptable as say, abuse in a
public bar or defecating in public — beyond the pale. I think
current policy accepts this. Simply blocking kids from every type
of social media means they’ll go crazy when they get access, or
more likely, find ways round the blocks.
Why is guidance, as opposed to a new law, appropriate?
Guidance already exists around malicious and threatening
communications. The Director of Public Prosecutions brought in
guidance for prosecutors in 2012 when there was concern that
existing laws might be being used for over-broad prosecution of
speech online, with a chilling effect on freedom of speech. The
notorious “Twitter Joke Trial” was a key example. Figures the Director
of Public Prosecutions gave in evidence during the writing of the
report showed that the guidance has been productive: prosecutions
dropped by about two thirds over two years.
What the report is calling for is one more specific bit of
guidance. Currently the law criminalises “grossly offensive”
communications. It’s not clear if this covers revenge porn.
Because we live in a liberal democracy, generally images of
consensual sex are not seen as obscene or indecent under the Obscene
Publications Act. But revenge porn is different because there was
consent when the image was taken but not when it was distributed
later. What we could do with is clarification of whether that makes
the images “grossly offensive”. The way the law works, you don’t ever
truly know until there’s a case, so guidance would be helpful.
If we don’t need a new law, why have the police been
unable — or unwilling — to prosecute people in many cases?
There are a lot of reasons why police have struggled with social
media offences: sheer volume, the novelty, the fact the police are
often themselves not familiar with these sites and so may not take
the abuse seriously, or know how to investigate, or what evidence
needs to be collected. There are very many possible laws applicable –
laws on harassment, threats, communications and even copyright –
which is confusing for everyone. We need more training both
for police and prosecutors in understanding how crime operates in
the online and social media environments.
Do you think the report has given a balanced conclusion
— what more should we be doing?
I think it has and I think its strength is that it admits these
are tentative conclusions because we really don’t know very much
yet about social media crimes. We don’t know why people troll
online or abuse women, or what would make them stop; we don’t know
if it’s connected to general criminality or is simply a one-off
phenomenon; and we have no idea if jail is the right response or if
something like restorative justice, where the victim meets the
perpetrator and they realise she is a real human being, might work
better. We have almost no research, and what there is is mostly
about children, not adults. We desperately need more empirical work,
thought, and common sense in this area, not a kneejerk rush towards
new criminal laws which grab headlines but solve nothing.
Disclaimer: Lilian Edwards was the Specialist
Adviser for the Lords Select Committee on Communications’ report
on social media and criminal law. However, the views above are hers
and hers alone.