The rise of the anti-viral viral campaign




A few days ago, the Islamic
State (IS) released a video already so infamous you probably know
what it is. Their “Message to America” was graphic, gruesome and
tragic: the beheading of American photojournalist James
Foley.  

Like thousands of gruesome videos before it, from executions in Iran to sniper-cams in Iraq and street fighting in Ukraine, the video began to spread on YouTube. Copies multiplied and the view count climbed quickly. It's no surprise that it did: gory, terrifying and outrageous videos are, along with those we find funny, the ones we are most likely to share. YouTube is full of both.

The video of James Foley, like countless others, broke YouTube's community guidelines, and YouTube actively tried to remove it. But 100 hours of footage are uploaded to YouTube every minute, and a video uploaded from thousands of different accounts, in just as many slightly different forms, is hugely difficult to eradicate from every corner of that massive platform. The best YouTube can hope to do is play a fairly effective game of whack-a-mole as new copies surface. And even if YouTube manages that, there are other platforms, such as Diaspora, that either cannot or will not remove it. Simply limiting access to hosted information is, in the information age, remarkably difficult if not outright impossible.

The police warned that sharing, or even viewing, the video might be a crime under terrorism laws. But banning something, whether books, food or videos, often makes us want it more. The scale of samizdat publishing in Soviet Russia was enormous; Penguin even has a collection of previously banned books. Today, some things go viral precisely because they are banned, and some videos claim to be banned when they certainly are not. Being censured or censored by the authorities is often practically a marketing win.

But something strange happened this week, something genuinely outside the ordinary social media routine: people started calling on others not to share the IS video. A hashtag began to rise, #ISISMediaBlackout, and as people flocked to it a collective sense began to grow: that each person who, horrified and fascinated, clicked on, linked to or tweeted the video was handing IS a propaganda coup; a possibly important victory for the group in energising its supporters and terrifying its enemies.

As #ISISMediaBlackout gathered pace it became one of the world's first anti-viral virals. Tweeted around 20,000 times, it managed to eclipse the content itself. In the world of media, moments like this, when calls not to share something drown out the thing itself, are genuinely rare. As the front pages of mainstream newspapers around the world filled with images, many people on social media said, simply: no.

This could be an important moment. Whilst social media has now
emerged as a significant new social and political space, it isn’t a
very moral space. From cyber-bullying to online trolling, misogyny
and hate speech, many of the significant problems we’re currently
grappling with on social media come down to the basic fact that we
don’t treat one another very well online. Our digital selves are
more aggressive, more abusive, and far less likely to consider what
our behaviour does to others. What we most often share is an important part of this: content that is eye-catching, hilarious or horrific, but often neither factually true nor responsible.

#ISISMediaBlackout might be a sign that social media is taking the step forward it most badly needs. More important than any new feature rolled out by Twitter or Facebook, social media needs to mature morally. It needs digital citizenship: the realisation that our behaviour online affects and can harm others, and especially that the choice of what we share is morally significant.

It's too early to tell whether this moment will coalesce into something more lasting, whether instead of simply flocking to the latest gruesome video because it sates our morbid curiosity we will more often pause and think of the wider implications of doing so. But whether it does or not, for a moment the outrage committed by IS reminded us that we are moral beings in the digital world too. Let's hope that we don't need any more tragedies to remember that.

This is a guest post from Carl Miller, cofounder and
Research Director of the Centre for the Analysis of Social Media
(CASM) at Demos. You can
follow him on Twitter: @carljackmiller


