Security expert Ari Juels has been thinking about how technology can derail society for about as long as he can remember. That doesn’t mean Juels, the chief scientist at Chainlink and professor at Cornell Tech in New York City, thinks the world is going off the rails anytime soon.

But over the past decade, with the development of large language models that back increasingly powerful artificial intelligence systems and of autonomous, self-executing smart contracts, things have begun to trend in a more worrying direction.
There is a “growing recognition that the financial system can be a vector of AI escape,” Juels said in an interview with CoinDesk. “If you control money, you can have an impact on the real world.”
This doomsday scenario is the jumping-off point for Juels’ second novel, “The Oracle,” a crime thriller published by heavyweight science fiction imprint Talos, about an NYC-based blockchain researcher enlisted by the U.S. government to thwart a weaponized crypto protocol. Though it is set in the near future, readers may notice plenty that looks familiar today.
Take the protagonist’s research into smart contracts that can go rogue, and kill, similar to Juels’ own 2015 academic paper about “criminal smart contracts.” Or the references to Chainlink CEO Sergey Nazarov’s famous plaid shirt.
Other elements, like a powerful AI tool that helps computers interact with and interpret the world, akin to OpenAI’s ChatGPT, only came online after Juels started writing.
Thankfully, fiction is sometimes stranger than reality, and the prospect of smart contracts programmed to kill remains a distant threat, Juels said.
He said he remains cautiously optimistic that if people start thinking about the risks today and designing guardrails like blockchain-based oracles (essentially feeder systems for information), it could help prevent problems in the long run.
CoinDesk caught up with Juels last week to discuss the burgeoning intersection of blockchain and AI, the ways things can go off the rails and what people over- and under-rate about technology.
Are smart contracts like the ones in “The Oracle” possible today?

They’re not possible with today’s infrastructure, but are possible or at least plausible with today’s technology.
What’s your timeline for when something like the events of the book could play out?

It’s a little hard to say. At least a few years.
What makes them technologically plausible now, when in fact they weren’t at the time I started writing the novel, is the advent of powerful LLMs [large language models], because they’re needed essentially to adjudicate what the novel calls a rogue contract.
The rogue contract was soliciting a crime, in this case the death of the hero of the novel, and somehow a determination has to be made as to whether or not the crime occurred, and who was responsible for it and should therefore receive a reward. To do those two things, you need something to extract keywords from news articles: basically an LLM plugged into blockchain infrastructure, inheriting the same properties that smart contracts have, namely the fact that they are, at least in principle, unstoppable if they’re coded to behave that way.
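To make that adjudication loop concrete, here is a minimal sketch in Python, under stated assumptions: the condition is a benign, parametric-insurance-style event, the keyword extractor is a naive stand-in for an LLM call, and everything runs off-chain. The class and function names are invented for illustration; this is not Chainlink code or code from the novel.

```python
# Illustrative sketch only: a toy "oracle adjudicates, contract pays" pattern.
# llm_extract_keywords is a naive stand-in for an LLM call; a real deployment
# would live on-chain and query a decentralized oracle network instead.
from dataclasses import dataclass


def llm_extract_keywords(article_text: str) -> set:
    """Stand-in for an LLM keyword extractor; here just naive tokenization."""
    return {word.strip(".,").lower() for word in article_text.split()}


@dataclass
class EventOracle:
    """Attests whether a described event appears in supplied news articles."""
    required_keywords: set

    def event_reported(self, articles: list) -> bool:
        return any(self.required_keywords <= llm_extract_keywords(a) for a in articles)


@dataclass
class ConditionalPayoutContract:
    """Toy analogue of a smart contract that pays out on a positive attestation."""
    oracle: EventOracle
    reward: int
    paid: bool = False

    def settle(self, articles: list, claimant: str) -> str:
        if self.paid:
            return "already settled"
        if self.oracle.event_reported(articles):
            self.paid = True
            return f"pay {self.reward} to {claimant}"
        return "condition not met"


# Example with a benign, parametric-insurance-style condition.
oracle = EventOracle(required_keywords={"hurricane", "miami"})
contract = ConditionalPayoutContract(oracle=oracle, reward=100)
print(contract.settle(["Hurricane makes landfall near Miami."], claimant="alice"))
```

The worrying property Juels points to is not the payout logic itself, which is ordinary, but that an on-chain version of it could be made effectively unstoppable once deployed, with the LLM-backed oracle supplying the missing judgment about what happened in the real world.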
Do you need a blockchain to build smart contracts?

It depends on the trust model you’re after. In a sense, the answer is no, you could run smart contracts in a centralized system. But then you would lose the benefits of decentralization, namely resilience to the failure of individual computing devices, censorship-resistance and confidence that the rules aren’t going to change out from under you.
This may be a weird question, but I figured you might get it: Are blockchains Apollonian?

That is a weird question, and I do get it. I would not say blockchains in general, but oracles definitely. As you may know, the novel is about not just modern-day oracles but also the Oracle of Delphi. Both aim to serve as definitive sources of truth, in some sense. One is literally powered by the god Apollo, at least in the belief of the ancient Greeks. And the other is powered by authoritative sources of data like websites. So if you take that perspective, yes, I would say that oracle systems are kind of Apollonian in nature, because Apollo was the god of truth.
Is blockchain privacy sufficient today?

All technology is a double-edged sword. There are obviously good and important facets to privacy. You can’t have a truly free society without privacy. People’s thoughts, at the minimum, need to remain private for people to act freely. But privacy can be abused. Criminal activities can make use of blockchain technology. But I would say that we today don’t yet have powerful enough privacy-preserving tools to provide users with the benefits of privacy that I think they deserve.
Would you say technology as a whole is a generally positive force?

There are clear benefits to technology. We’ve come a very long way toward eradicating global poverty; that’s one of the good-news stories that people tend to overlook. But there have been costs to the use of new technologies. That becomes visible when you look at the general happiness or contentment of those in rich Western nations, which has stagnated. That can be accounted for, in part, as a side effect of technology. There are other factors at play, including a breakdown in social cohesion and feelings of loneliness, but technology has been somewhat responsible for that.
One of the reasons I incorporated the ancient Greek dimension [in The Oracle] was that I feel one of the things we’re losing as a result of the pervasiveness of technology is a certain sense of awe. The fact that we have the answers to most of the questions we would naturally pose at our fingertips, via Google or AI agents, means a diminishment of the sense of wonder and mystery that used to surround us. There’s less room for us in our daily lives to explore intellectually. You have to dig deeper, if that makes sense.
It’s a beautiful idea. The World Wide Web doesn’t contain the sum total of human knowledge, but it is a significant chunk of it. Yet we use it mostly to indulge our base desires.

We’ve been given this incredible gift. And it’s surprisingly hard for us to appreciate.
What are we overreacting about when it comes to technology?

I tend to be somewhat optimistic when it comes to AI doomsday scenarios. I’m by no means a subject matter expert here, but I have studied information security for quite a long period of time. And the analogy I like to draw, and I hope it holds up, is to the Y2K bug. The doomsday scenarios that people envisioned didn’t happen. There wasn’t a need for manual intervention. We have all of these kinds of hidden circuit breakers in place. And so I feel a certain degree of confidence that those circuit breakers will kick in if, say, an AI agent goes rogue. This provides me at least with a certain degree of comfort and optimism around the future of AI.
I’ve noticed that most of your scholarly writing is co-authored. Was it a big shift writing alone?

Yes, it was a huge shift in a number of ways from writing a scholarly paper. One is that when you write a scholarly paper, the language basically has to be dry; otherwise, the paper is likely to be rejected by peers.
A funny story about that: In 1999, I co-authored a paper, actually the paper that proposed the term proof-of-work [the consensus mechanism behind Bitcoin]. We cited a popular cookbook, “The Joy of Cooking,” because the title of the paper included the term “bread pudding” and we wanted a reference to explain what it was. A reviewer wanted to reject the paper because he felt this reference wasn’t appropriately scholarly. That’s the type of milieu in which you operate in academia.
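For readers who only know proof-of-work from the bracketed gloss above, here is a minimal, generic sketch of the idea in Python. It is not the construction from the 1999 paper, just the common hash-puzzle form: finding a valid nonce takes many hash attempts, while checking one takes a single hash.

```python
# Minimal, generic proof-of-work sketch (not the 1999 paper's construction).
import hashlib
from itertools import count


def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256(challenge || nonce) has the required leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce


def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash, no matter how long the search took."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))


nonce = solve(b"example-challenge", difficulty_bits=16)
print(nonce, verify(b"example-challenge", nonce, difficulty_bits=16))
```

At this difficulty the search takes roughly 2^16 hash attempts on average, while verification is one hash; that asymmetry is what the term names.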
But more importantly, when you’re working alone on a project of this type, it gives much freer rein to the imagination. That’s one of the nice things about collaborating with other people, and one of the reasons I do it: it can help you formulate ideas, but also act as a check on ideas that don’t work or don’t make sense. In the case of fiction, within loose limits, there’s no such thing as an idea that doesn’t work or doesn’t make sense.
Do you have any unusual work techniques, as someone who teaches in the Ivy League, does research for Chainlink and writes in his spare time?

It depends on the set of projects I’m juggling. The thing that was helpful when I was trying to squeeze in time for the book was that I was obsessive about writing it. It was a real flow process. The Hungarian psychologist Mihaly Csikszentmihalyi explored the concept of flow, defining it as an activity in which you can maintain a unique focus over an extended period and lose track of time. Writing placed me in a flow state. I squeezed it into the little nooks and crannies of time available to me.
Do you think NFTs are overrated or underrated?

Bored Apes are overrated. Pointless NFTs are overrated. But the long-term future of NFTs is perhaps underrated. In some sense, it’s a new artistic medium, the way photography was in the 19th century. Only slowly did people come to see photography as a real artistic medium. In the long run, I’m actually pretty bullish, even though I haven’t been able to convince my PhD students to work on NFT-related projects.
Interesting. Does interest in crypto change cohort by cohort?

It changes from individual to individual. I have some PhD students who come in not knowing what they want to work on, and I will end up helping them set up a research direction. And others know from day one exactly what they want to do. I have one student about to graduate who knew that he wanted to work on DeFi. That’s basically what he’s done for five years while working with me. I see the role of a PhD advisor as helping my PhD students accomplish whatever it is they want to accomplish.
Anything else you wanted to say about the book?

One thing I do want to emphasize, an important message for the community at large, is the growing recognition that the financial system can be a vector of AI escape. People are worried about AI agents escaping from their confines and controlling cyber-physical systems like autonomous vehicles, power plants or weapons systems; that’s the scenario they have in mind. I think they forget that the financial system, particularly cryptocurrency, is especially well suited to control by AI agents and can itself be an escape vector. If you control money, you can have an impact on the real world, right?
The question is, how do we deal with AI safety in view of this very particular concern around blockchain systems? The book has actually gotten me and my colleagues at Chainlink thinking about how oracles act as gatekeepers to this new financial system, and the role they could play in AI safety.
Is there anything tangible in mind that Chainlink can do to prevent something like that?

This is something I’ve just started to give thought to, but some of the guardrails that are already present in systems we build, like CCIP or cross-chain bridges, would actually be helpful in the case of an AI escape, by establishing boundaries for what a malicious agent could do. That’s a starting point. But the question is, do we need things like anomaly detection in place to detect not just rogue human activity but rogue AI activity? It’s an important problem; it’s actually one I’m starting to devote a fair amount of attention to.
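As an illustration of the kind of boundary and anomaly check described above, here is a minimal hypothetical sketch in Python of a per-sender rate limit with a crude outlier flag on value transfers. The class name, thresholds and rules are invented for the example; this is not CCIP or any real bridge code.

```python
# Hypothetical guardrail sketch: per-sender rate limits plus a crude anomaly flag
# on value transfers. Illustrative only; not CCIP or any real bridge code.
import statistics
import time
from collections import defaultdict, deque


class TransferGuardrail:
    def __init__(self, max_per_window, window_seconds, anomaly_factor=5.0):
        self.max_per_window = max_per_window      # hard cap per sender per rolling window
        self.window_seconds = window_seconds
        self.anomaly_factor = anomaly_factor      # flag transfers far above a sender's norm
        self.history = defaultdict(deque)         # sender -> deque of (timestamp, amount)

    def check(self, sender, amount, now=None):
        now = time.time() if now is None else now
        window = self.history[sender]
        # Drop entries that have aged out of the rolling window.
        while window and now - window[0][0] > self.window_seconds:
            window.popleft()

        recent_total = sum(a for _, a in window)
        if recent_total + amount > self.max_per_window:
            return "reject: rate limit exceeded"

        past_amounts = [a for _, a in window]
        if len(past_amounts) >= 3 and amount > self.anomaly_factor * statistics.median(past_amounts):
            return "hold: anomalous amount, needs review"

        window.append((now, amount))
        return "allow"


guard = TransferGuardrail(max_per_window=1_000.0, window_seconds=3600.0)
for amount in (50.0, 60.0, 55.0, 400.0, 900.0):
    print(amount, guard.check("agent-7", amount))
```

The point is the shape of the control, not the numbers: hard boundaries that cap what any single agent, human or AI, can move, plus a review path for behavior that departs from that agent’s own history.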