Deepfakes are rapidly becoming easier and quicker to create, and they're opening a door into a new form of cybercrime. Although the fake videos are still mostly seen as relatively harmless or even humorous, this craze could take a more sinister turn in the future and be at the heart of political scandals, cybercrime, or even scenarios we can't yet imagine – and not just ones targeting public figures.
A deepfake is a piece of synthetic media produced by AI-based human-image synthesis, either built from scratch or created by manipulating existing video, and designed to replicate the look and sound of a real person. Such videos can look incredibly real, and many of the current examples involve celebrities or public figures saying something outrageous or untrue.
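To make the mechanics a little more concrete: the face-swap technique behind many of these videos trains one shared encoder together with a separate decoder per identity, then decodes person A's encoded face with person B's decoder. The sketch below is a minimal, illustrative PyTorch version of that idea only; the layer sizes, the 64x64 input faces and all the names are my own assumptions, not any particular tool's implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the shared-encoder / per-identity-decoder idea
# behind early face-swap deepfakes. All sizes are assumptions (64x64 faces).
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct person A's face
decoder_b = Decoder()  # trained to reconstruct person B's face

# After training, encoding a frame of person A and decoding it with B's
# decoder yields B's face wearing A's pose and expression.
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a cropped, aligned face
swapped = decoder_b(encoder(face_a))
```

Real tools layer much more on top of this, such as face detection and alignment, richer losses, colour correction and blending the result back into the frame, which is partly why the output keeps getting harder to spot.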
New research shows a huge increase in the creation of deepfake videos, with the number online almost doubling in the last nine months alone. Deepfakes are improving in quality at a swift rate, too. The video showing Bill Hader morphing effortlessly between Tom Cruise and Seth Rogen is just one example of how authentic these videos are looking, as well as sounding. Searching YouTube for the term ‘deepfake’ will make you realize that we are viewing only the tip of the iceberg of what is to come.
In fact, we have already seen deepfake technology
used for fraud, where a deepfaked voice was reportedly used to scam
a CEO out of a large sum of cash. It is believed the CEO of an unnamed UK firm thought he was on the
phone to the CEO of the German parent company and followed orders to immediately transfer €220,000 (roughly US$244,000) to a Hungarian supplier’s bank account. If it is this easy to influence someone simply by asking over the phone, then we will surely need better security measures in place to mitigate this threat.
Fooling the naked eye
We have also seen apps making ‘deepnudes’ that can turn a photo of any clothed person into a topless image in seconds. Luckily, one particular app, DeepNude, has since been taken offline, but what if it comes back in another form with a vengeance and is able to create convincingly authentic-looking video?
There is also evidence that the production of these videos is becoming a lucrative business, especially in the pornography industry.
The BBC says “96% of these videos are of female celebrities
having their likenesses swapped into sexually explicit videos – without their
knowledge or consent”.
A recent Californian bill has taken a leap of faith and made it illegal to create a pornographic deepfake of someone without their consent, with a penalty of up to $150,000. But chances are that no legislation will be enough to deter some people from fabricating such videos.
To be sure, an article from The Economist notes that, to make a convincing enough deepfake, you would need a serious amount of video footage and/or voice recordings, even for a short clip. I desperately wanted to create a deepfake of myself but, sadly, without many hours of footage of my face, I wasn’t able to make one.
Having said that, in the not-too-distant future it may be entirely possible to create, from just a few short Instagram Stories, a deepfake that is believed by the majority of one’s followers online or by anyone else who knows them. We may see some unimaginable videos appearing of people closer to home – the boss, our colleagues, our peers, our family. Deepfakes may also be used for bullying in schools, in the office or even further afield.
Furthermore, cybercriminals will definitely use this technology more to spearphish victims. Deepfakes keep getting cheaper to create and are becoming near-impossible to detect with the human eye alone. As a result, all that fakery could very easily muddy the waters between fact and fiction, which in turn could lead us to trust nothing at all, even when our senses tell us to believe what we are seeing.
Heading off the very real threat
So, what can be done to prepare us for this threat?
First, we need to better educate people about the existence of deepfakes, how they work and the potential damage they can cause. We will all need to learn to treat even the most realistic videos we see as though they could be total fabrications.
Secondly, we desperately need better technology for detecting deepfakes. There is already research going into it, but it’s nowhere near where it should be yet. Although machine learning is at the heart of creating deepfakes in the first place, something needs to act as the antidote, capable of detecting them without relying on human eyes alone.
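Much of that research frames detection as a classification problem: gather known-real and known-fake footage, then train a model to score individual frames. The snippet below is a rough sketch of that general approach under my own assumptions (a fine-tuned ResNet-18, 224x224 frames, binary labels); it is not any particular published detector.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hedged sketch: fine-tune a pretrained ResNet-18 as a per-frame
# real-vs-fake classifier. Dataset, labels and hyperparameters are
# placeholder assumptions for illustration only.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real (0), fake (1)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (N, 3, 224, 224) normalized video frames; labels: 0 = real, 1 = fake."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def score_clip(frames: torch.Tensor) -> float:
    """Average the per-frame probability of 'fake' across a whole clip."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frames), dim=1)[:, 1]
    return probs.mean().item()
```

In practice, serious detectors go well beyond per-frame scores, also looking for blending boundaries, unnatural blinking, lighting inconsistencies and audio-visual mismatches across whole clips.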
Finally, social media platforms need to realize the huge potential threat posed by deepfakes, because when you mix a shocking video with social media, the result tends to spread very rapidly and could have a detrimental impact on society.
Don’t get me wrong; I hugely enjoy the development of technology and watching it unfold in front of my eyes. However, we must remain aware of how technology can sometimes affect us detrimentally, especially when machine learning is maturing faster than ever before. Otherwise, we will soon see deepfakes become deepnorms, with far-reaching effects.