AI has arrived, and it's already changing things in the world of crypto. Coders use it to code, researchers use it to research and, unfortunately, scammers use it to scam.
That is the finding of a new report by blockchain analytics firm Elliptic about the emerging risks of AI in perpetuating criminal use of crypto.
This is an excerpt from The Node newsletter, a daily roundup of the most pivotal crypto news on CoinDesk and beyond. You can subscribe to get the full newsletter here.

Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.
"The rise of artificial intelligence has shown huge potential for driving innovation, not least within crypto. However, as with any emerging technology, there remains a risk of threat actors seeking to exploit new developments for illicit purposes," the report reads.
While the risk right now remains small, the firm's researchers did identify five "typologies" where AI is already being deployed in nefarious ways. These include creating and disseminating deepfakes to make scams more convincing, building AI-themed scam tokens to capitalize on hype, using large language models to devise hacks, spreading disinformation and making more convincing phishing websites and prompts to facilitate identity theft.
Awareness of these new (or, frankly, old but now supercharged) scams helps users stay ahead of the curve. That means crypto users should become more familiar with the most common types of crypto-related scams.
CoinDesk has a good report on that front here, covering all the basics like social media scams, Ponzi schemes, rug pulls and "romance scams" (now often referred to as "pig butchering").
"The reason there is no easy way to deal with the problem is because it's really multiple problems, each with its own variables and solutions," Pete Pachal, author of the excellent Media CoPilot Substack, wrote in a recent piece about deepfakes, AI and crypto.
According to Pachal, who recently spoke at a Consensus 2024 session called "From Taylor Swift to the 2024 Election: Deepfakes vs. Truth," deepfakes have become increasingly difficult to spot as AI image generation has improved.
For instance, earlier this month a video circulated on social media showing a fake Elon Musk promoting Quantum AI, a sham trading platform promising users outsized returns, and it apparently tricked more than a few people.
Instances like these are likely only going to grow.
Verification company Sumsub claims that crypto was "the main target sector" for almost 90% of deepfake scams detected in 2023.
While it's unclear how effective these scams were, the FBI's online crime report found crypto investment losses in the U.S. grew 53% to $3.9 billion last year.
However, it's worth noting that instances of fraud in the crypto industry are often only incidentally related to crypto: it happens to be a topic that draws a lot of attention and is often confusing for people not steeped in the culture.
As CFTC Commissioner Summer Mersinger told CoinDesk: "I think it's a little unfair because a lot of these cases are just run-of-the-mill fraud; somebody stealing someone else's money, someone claiming to buy crypto, but not actually buying the crypto. So we've seen this play out with whatever the hot topic is at the time."
If there's any consolation, it's that images, video and text generated by AI are still relatively easy to notice if you know what to look for.
Crypto users in particular should be vigilant, considering how common it is for even high-profile figures to get tricked by social engineering schemes or malicious scripts.
MetaMask builder Taylor Monahan has sage advice here: always assume you're a potential target, and actually verify that what you're clicking on is what it purports to be.
Crypto is already a low-trust environment, just given the nature of the technology. And it might get even lower.