The following is a conversation with Michael Levin, his second time on the
podcast. He is one of the most
fascinating and brilliant
biologists and scientists I’ve
ever had the pleasure of speaking
with. He and his labs at Tufts
University study and build
biological systems that help us
understand the nature of intelligence,
agency, memory, consciousness, and life
in all of its forms here on Earth, and
beyond. This is the Lex Fridman
Podcast. To support it,
please check out our sponsors in the
description, where you can also find
links to contact me, ask
questions, give feedback, and so
on. And now, dear friends,
here’s Michael Levin.
You write that the central question
at the heart of your work from
biological systems to
computational ones is, “How do
embodied minds arise in the
physical world, and what
determines the capabilities and
properties of those minds?” Can you
unpack that question for us,
and maybe begin to answer it?
Well, the fundamental tension is between the first-person, second-person, and third-person descriptions of mind.
So, in third-person, we want to understand how we recognize minds: how do we know, looking out into the world, what degree of agency there is, and how best to relate to the different systems that we find?
And are our intuitions any good when
we look at something and it looks
really stupid and mechanical,
versus it really looks like there’s
something cognitive going on there? How
do we get good at recognizing them?
Then there’s the second-person, which is the control aspect, and that matters both for engineering and for regenerative medicine, when you want to tell the system to do something, right? What kind of tools are you going to use? And this is a major part of my framework: all of these kinds of things are operational claims.
Are you going to use the tools
of hardware rewiring, of control
theory and cybernetics,
of behavior science, of
psychoanalysis and love and friendship? Like,
what are the interaction protocols that you
bring, right? And then in first-person, it’s this notion of having an inner perspective: being a system that has valence, cares about the outcome of things, makes decisions, has memories, and tells a story about itself and the outside world. And
how can all of that exist and
still be consistent with the laws of physics
and chemistry and various other things that we
see around us? That I find to be maybe the most interesting and most important mystery for all of us, both on the scientific and on the personal level. So that’s what I’m interested in.
So your work is focused on
starting at the physics, going all
the way to friendship and love and
psychoanalysis.
Yeah, although, actually I would turn that
upside down. I think that pyramid is backwards,
and I think it’s behavior science at the
bottom. I think it’s behavior science all the
way. I think in certain ways,
even math is the behavior of
a certain kind of being that
lives in a latent space, and
physics is what we call systems that at least look to be amenable to a very simple, low-agency kind of model, and so on. But that’s what I’m interested in: understanding that and developing applications. Because
it’s very important to me
that what we do is
transition deep ideas and
philosophy into actual practical
applications that not only make it
clear whether we’re making any progress
or not, but also allow us to relieve
suffering and make life better for
all sentient beings, and enable us and others to reach their full potential. So these are very
practical things, I think.
Behavioral science, I suppose, is more subjective, and mathematics and physics are more objective? Would that be the clear difference?
The basic idea is to ask where something is on that spectrum. I’ve called it the spectrum of persuadability; you could call it the spectrum of intelligence or agency or something like that. I like the notion of the spectrum of persuadability because it’s an engineering approach. It means
that these are not things you can
decide or have feelings about from
a philosophical armchair. You have
to make a hypothesis about
which tools, which interaction
protocols you’re going to bring to a given system, and
then we all get to find out how that worked out for
you, right? So you could be wrong in
many ways, in both directions. You can
guess too high or too low, or wrong in
various ways, and then we can all find out
how that’s working out. And so, I do
think that the behavior of certain
objects is well-described
by specific formal rules,
and we call those things the subject of
mathematics. And then there are some other
things whose behavior really
requires the kinds of
tools that we use in behavioral
cognitive neuroscience, and those
are other kinds of minds
that we think we study in
biology or in psychology
or other sciences.
Why are you using the term persuadability?
Who are you persuading, and of what?
Well-
In this context.
Yeah, the beginning of my work
is very much in regenerative
medicine, in bioengineering,
things like that. So
for those kinds of systems, the
question is always, how do you get the
system to do what you want it to do? So
there are cells, there are molecular
networks, there are materials, there
are organs and tissues and synthetic
beings and biobots and whatever.
So the idea is, if you’re injured and I want your cells to regrow a limb, for example, I have many options. Some
of those options are I’m going to
micromanage all of the molecular
events that have to happen, right? And
there’s an incredible number of those.
Or maybe I just have to micromanage the
cells and the stem cell
kinds of signaling factors.
Or maybe actually I can give
the cells a very high-level
prompt that says, “You really should
build a limb,” and convince them to do
it, right? And so which of those is
possible? I mean, clearly people have a
lot of intuitions about that. If you ask
standard people in regenerative medicine and
molecular biology, they’re going to say, “Well, that
convincing thing is crazy. What we really should
be doing is talking to the cells, or better
yet, the molecular networks.” And
in fact, all the excitement in the biological sciences today is around single-molecule approaches and big data and genomics and all of that. The assumption is that going down in scale is where the action’s going to be. And I think that’s wrong.
But the thing that we can say
for sure is that you can’t
guess that. You have to do
experiments and you have to see because you
don’t know where any given system is on
that spectrum of persuadability. And it turns out that every time we look, we take tools from behavioral science: learning, different kinds of training, the kinds of models used in active inference and surprise minimization, perceptual multistability and visual illusions, stress perception and memory, active memory reconstruction, all these interesting things. When we apply them outside the brain to other kinds of living systems, we find novel discoveries and novel capabilities, actually being able to get the material to do new things that nobody had ever found before.
And that’s precisely because, I think, people didn’t look at it from those perspectives; they assumed it was a low-level kind of thing. So when I say
persuadability, I mean different types
of approaches, right? And we all
know if you want to persuade your
wind-up clock to do something,
you’re not going to argue with it or make it feel guilty or anything.
You’re going to have to get in there with a wrench and you’re gonna have
to, you know, tune it up and do whatever.
If you want to do that same thing to a
cell or a thermostat or an animal or
a human, you’re going to be using
other sets of tools that we’ve given
other names to. Now, the important thing about that spectrum is that as you get to its right side, where the agency of the system goes up, it is no longer just about persuading it to do things. It’s a bidirectional relationship, what Richard Watson would call a mutual vulnerable knowing. So the idea
is that on the right side of that
spectrum, when systems reach the
higher levels of agency, the idea is
that you are willing to let that
system persuade you of things as
well. You know, in molecular biology, you do things, and hopefully the system does what you want it to do, but you haven’t changed. You’re still exactly the way you came in.
But on the right side of that spectrum, if
you’re having interactions with even cells, but
certainly, you know, dogs,
other animals, maybe other
creatures soon, you’re not the same at
the end of that interaction as you were
going in. It’s a mutual bidirectional
relationship. So it’s not just you persuading
something else, it’s not you
pushing things. It’s a mutual
bidirectional set of
persuasions, whether those are
purely intellectual or of other kinds.
So in order to be
effective at persuading an
intelligent being, you yourself have to be
persuadable. So the closer in intelligence
you are to the thing you’re trying
to persuade, the more persuadable
you have to become, hence the mutual
vulnerable knowing. What a term.
Yeah. Richard, you should talk to Richard
as well. He’s an amazing guy and he’s got
some very interesting ideas about
the intersection of cognition and
evolution. But I think what you bring up is very important, because there has to be a kind of impedance match between
what you’re looking for and the tools that
you’re using. I think the reason
physics always sees mechanism and
not minds is that physics uses
low agency tools. You’ve got
voltmeters and rulers and things like
this. And if you use those tools as your
interface, all you’re ever going to
see is mechanisms and those kinds
of things. If you want to see minds, you
have to use a mind, right? You have to have
some degree of resonance between your
interface and the thing you’re hoping to find.
You said this about physics before. Can
you just linger on that and expand on it,
what you mean, why
physics is not enough to
understand life, to understand mind,
to understand intelligence? You
make a lot of controversial statements with your work. That’s one of them, because there are a lot of physicists who believe they can understand life, the emergence of life, the origin of life, the origin of intelligence using the tools of physics.
In fact, all the other tools
are a distraction to those
folks. If you want to understand
fundamentally anything, you have to start at
physics to them. And you’re saying,
“No, physics is not enough.”
Here’s the issue. Everything here hangs on what it means to understand, okay? For me, to understand doesn’t just mean to have some sort of pleasing model that seems to capture some important aspect of what’s going on. It also means that you have
to be generative and creative
in terms of capabilities. So for
me, that means if I tell you this
is what I think about cognition in cells
and tissues, it means, for example,
that I think we’re going to be able
to take those ideas and use them
to produce new regenerative medicine that
actually helps people in various ways, right?
It’s just an example. So if you think that as a physicist you’re going to have a complete understanding of what’s going on from that perspective of fields and particles and, you know, who knows what else is at the bottom there, does that mean then that when
somebody is missing a finger or has a
psychological problem, or
you know, has these other
high-level issues, that you have something for
them, that you’re going to be able to do something?
Because my claim is that you’re not going to. And even if you have some theory of physics that is completely compatible with everything that’s going on, it’s not enough. It’s not specific enough to enable you to solve the problems you need to solve. In the
end, when you need to solve those problems,
the person you’re going to go to is not
a physicist. It’s going to be either
a biologist or a psychiatrist, or who
knows, but it’s not going to be a
physicist. And the simple example is this. Let’s say someone comes in here and tells you a beautiful mathematical proof, okay? It’s really deep and beautiful,
and there’s a physicist nearby, and he
says, “Well, I know exactly what happened.
There were some air particles that moved
from that guy’s mouth to your
ear. I see what goes on. It moved
the cilia in your ear and the electrical
signals went up to your brain.” I mean, we have
a complete accounting of what happened, done and
done. But if you want to understand
what’s the more important
aspect of that interaction, it’s not going to be found in
the Physics Department. It’s going to be found in the Math
Department. So my only claim is that physics is an amazing lens with which to view the world, but it captures certain things, and if you want to stretch it to encompass these other things, we just don’t call that physics anymore, right? We call that something else.
Okay. But you’re kind of
speaking about the super
complex organisms. Can we go to the
simplest possible thing where you first
take a step over the line, the Cartesian
cut, as you’ve called it, from the
non-mind to mind, from
the non-living to living?
The simplest possible
thing, isn’t that in the
realm of physics to understand? How do
we understand that first step, where you’re like, that thing has no mind and is probably non-living, and here’s a living thing that has a mind? That line.
I think that’s a really interesting line. Maybe
you can speak to the line as well, and can
physics help us understand it?
Yeah, let’s talk about it. Well, first of all,
of course it can. I mean, it can help, meaning
that I’m not saying physics is not helpful. Of
course it’s helpful. It’s a very important lens on
one slice of what’s going on in any of
these systems. But I think the most
important thing I can say about that
question is I don’t believe in any such
line. I don’t believe any of
that exists. I think there is
a continuum. I think we as humans like
to demarcate areas on that continuum
and give them names because
it makes life easier, and then we have a lot of battles over so-called category errors when people transgress those categories. Most of those categories may have done some good service at the beginning, when the scientific method was getting started and so on.
I think at this point they mostly hold back
science. Many, many categories that we
can talk about are at this point very
harmful to progress, because what those categories do is prevent you from porting tools across domains. If you think
that living things are
fundamentally different
from non-living things, or if you think
that cognitive things are these like
advanced brainy things that are
very different from other kinds of
systems, what you’re not going to do is take the tools that are appropriate to those cognitive systems, the tools that have been developed in behavioral science and so on, and try them in other contexts, because you’ve already decided that there’s a categorical difference, that it would be a categorical error to apply them. And people say this to me all the time, that you’re making a category error, as if these categories were given to us from on high and we have to obey them forevermore. The
categories should change with the
science. So yeah, I don’t believe in
any such line, and I think
a physics story is very
often a useful part of the
story, but for most interesting
things, it’s not the entire story.
Okay.
So if there’s no line, is it still useful
to talk about things like the origin of
life? That’s one of the big open mysteries before us as a human civilization, as scientifically minded, curious Homo sapiens. How did
this whole thing start?
Are you saying there is no
start? Is there a point where you
could say that invention right there
was the start of it all on Earth?
My suggestion, in my experience, is that there’s something much better than trying to define any kind of a line. We play this game all the time when I make my continuum claim: people try to come up with counterexamples, “Okay, well, what about this?” And I haven’t found one yet that really shoots it down, where you can’t zoom in and say, “Yeah, okay, but right before then this happened, and if we really look close, here’s a bunch of steps in between,”
right? Pretty much everything ends up being
a continuum. But here’s what I think is much more interesting than trying to draw that line: what’s really useful is trying to understand the transformation process. What is it that happened to scale up? And I’ll give you a really dumb example, and we always get into this because people often really don’t like this continuum view. The word adult, right?
Everybody is going to say, “Look, I know what a baby is. I know what an adult is. You’re crazy to say that there’s no difference.” I’m not saying there’s no difference. What I’m saying is the word adult is really helpful in court because you just need to move things along, and so we’ve decided that if you’re 18, you’re an adult. However, what it completely conceals is the fact that, first of all, nothing special happens on your 18th birthday, right? Second, if you actually look
at the data, the car rental companies actually
have a much better estimate because they
actually look at the accident statistics, and they’ll say about 25 is really what you’re looking for, right? So theirs is a little better; it’s less arbitrary. But in either case, what it’s hiding is the fact that we do not have a good story of what happened from the time you were an egg to the time you’re the supposed adult, and what the scaling of personal responsibility, decision-making, and judgment is. These are deep
fundamental questions. Nobody
wants to get into that every
time somebody, you know, has a
traffic ticket. So, okay, we’ve just
decided that there’s this adult idea.
And of course, it does come up in court
because then somebody has a brain tumor or
somebody’s eaten too many Twinkies or something
has happened. You say, “Look, that wasn’t me. Whoever did that, I was on drugs.” “Well, why’d you take the drugs?” “Well, that was yesterday. Me today, I’m someone else.” Right?
So we get into these very deep questions
that are completely glossed over
by this idea of an adult. So I
think once you start scratching the
surface, most of these categories are
like that. They’re convenient and they’re
good. You know, I get into this
with neurons all the time. I’ll ask
people, “What’s a neuron? Like, what’s
really a neuron?” And yes, if you’re
in neurobiology 101, of course you
just say like, “These are what
neurons look like. Let’s just study the neuroanatomy
and we’re done.” But if you really want to understand
what’s going on, well, neurons
develop from other types of
cells and that was a slow and
gradual process, and most of the
cells in your body do the things that neurons
do. So what really is a neuron, right?
So once you start scratching this, this happens everywhere. And I have some things coming out of our lab and others that are very interesting about the origin of life, but I don’t think it’s about finding that one point where it all began.
Yeah, there are innovations, right? Innovations that allow you to scale in an amazing way, for sure. And there are lots of people who study those, right?
So things like thermodynamic, kind of metabolic
things and all kinds of architectures
and so on. But I don’t think it’s about
finding a line. I think it’s
about finding a scaling process.
… the scaling process, but then there is more rapid scaling and there is slower scaling. So innovation,
invention, I think is
useful to understand so you
can predict how likely it is
on other planets, for example.
Or to be able to describe
the likelihood of these kinds of
phenomena happening in certain
kinds of environments. Again,
specifically in answering how
many alien civilizations there are.
That’s why it’s useful. But it
is also useful on a scientific
level to have categories, not just
because it makes us feel good and fuzzy
inside, but because it makes conversation
possible and productive, I think. If
everything is a spectrum, it’s…
It becomes difficult to make
concrete statements, I think.
Like, we even use the terms
of biology and physics.
Those are categories. Technically,
it’s all the same thing,
really. Fundamentally, it’s all the same.
There’s no difference between biology and
physics. But it’s a useful
category. If you go to the physics department and the biology department, those people are different in some kind of categorical way. I don’t know which is the chicken and which is the egg, but maybe the categories create themselves because of the way we think about them and use them in language. It does seem useful, though.
Let me make the opposite argument.
They’re absolutely useful. They’re useful
specifically when you want to
gloss over certain things.
The categories are exactly useful when
there’s a whole bunch of stuff. And this
is what’s important about science, is like
the art of being able to say something
without first having to
say everything, right?
Which would make it impossible. So
categories are great when you want to say,
“Look, I know there’s a bunch of stuff
hidden here. I’m going to ignore all that
and we’re just going to like, let’s
get on with this particular thing.”
And all of that is great as long as you don’t
lose track of the stuff that you glossed
over. And that’s what I’m afraid is
happening in a lot of different ways.
And in terms of that, look, I’m very interested in life beyond Earth and all these kinds of things, so we should also talk about what I call SUTI, S-U-T-I,
the search for unconventional
terrestrial intelligences. I think
we’ve got much bigger issues than
actually recognizing aliens off
Earth. But I’ll make this claim.
I think the categorical stuff is
actually hurting that search. Because,
if we try to define
categories with the kinds of
criteria that we’ve gotten used
to, we are going to be very
poorly set up to recognize life in
novel embodiments. I think we have
a kind of mind blindness. I think this
is really key. To me, the cognitive
spectrum is much more interesting
than the spectrum of life. I think
really what we’re talking about is
the spectrum of cognition. And I know it’s weird for a biologist to say, but I don’t think life is all that interesting a category. The categories of different types of minds, I think, are extremely interesting. And
to the extent that we think our
categories are complete and are cutting
nature at its joints, we are going to
be very poorly placed to
recognize novel systems. So for
example, a lot of people will say, “Well,
this is intelligent and this isn’t,” right?
And there’s a binary thing. And that’s useful occasionally, for some things. But instead of that, I would like to say, let’s admit that we have a spectrum.
But instead of just saying, “Oh,
look, everything’s intelligent,” right? Because
if you do that, you’re right, you can’t
do anything after that. What I’d like to say
instead is, no, you have to be very specific
as to what kind and how much. In other words, what problem spaces is it operating in? What kind of mind does it have? What kind of cognitive capacities does it have?
You have to actually be much more specific.
And we can even name them, right? That’s fine. We can name different types: this one is doing predictive processing; this one can’t do that, but it can form memories. What kind? Well, habituation and sensitization, but not associative conditioning. It’s fine to have categories
for specific capabilities, but it actually
makes for much more rigorous
discussions because it makes you say
what is it that you are claiming this thing
does, and it works in both directions.
So, some people will say, “Well,
that’s a cell. That can’t be
intelligent.” And I’ll say, “Well, let’s be
very specific. Here are some claims about…
Here’s some problem solving that it’s doing.
Tell me why that doesn’t… you know,
why doesn’t that match?” Or in the opposite direction,
somebody comes to me and says, “You’re right,
you’re right. You know, the whole, the whole solar
system, man. It’s just like this amazing…” I’m like,
“Whoa, okay. Well, what is it
doing?” Like, “Tell me what tools of
cognitive and behavioral science are you
using to reach that conclusion,” right?
And so I think it’s actually much more productive
to take this operational stance and say, “Tell me what protocols you think you can
deploy with this thing that would lead you
to use these terms.”
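To make the operational flavor of those capability claims concrete, here is a minimal sketch of what such assays could look like in code. The ToyCell black box and its numbers are hypothetical illustrations, not a model of any real cell; the point is that “forms memories by habituation, but not by associative conditioning” is the outcome of an experiment, not an armchair judgment.

```python
class ToyCell:
    """A hypothetical black box: it habituates, but forms no associations."""
    def __init__(self):
        self.gain = 1.0

    def respond(self, stimulus):
        if stimulus == "shock":
            r = self.gain
            self.gain *= 0.8          # response wanes with repetition: habituation
            return r
        return 0.0                    # neutral cues never acquire a response

def assay_habituation(system):
    """Does the response to a repeated stimulus decline over trials?"""
    responses = [system.respond("shock") for _ in range(10)]
    return responses[-1] < responses[0]

def assay_associative(system):
    """Pair a neutral cue with the stimulus, then test the cue alone."""
    for _ in range(10):
        system.respond("light")       # light predicts shock...
        system.respond("shock")
    return system.respond("light") > 0.0   # ...does light alone now evoke a response?

print("habituates:", assay_habituation(ToyCell()))                 # True
print("associative conditioning:", assay_associative(ToyCell()))  # False
```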
To have a bit of a meta-conversation about
the conversation. I should say that part of
the persuadability argument that
we two intelligent creatures
are doing is me playing devil’s advocate
every once in a while. And you did
the same, which is kind of interesting: taking the opposite view and seeing what comes out.
Because you don’t know the result
of the argument until you have the
argument, and it seems productive to just
take the other side of the argument.
For sure. It’s a very important thinking aid, first of all, what they call steelmanning, right? To try to make the strongest possible case for the other side and to ask yourself, “Okay, what are all the places that I am sort of glossing over because I don’t know exactly what to say? Where are all the holes in the argument, and what would a really good critique look like?” Yeah.
Sorry to go back there just to linger on the term
because it’s so interesting, persuadability.
Did I understand correctly that you
mean that it’s kind of synonymous with
intelligence? So it’s
an engineering-centric
view of an intelligence system. Because
if it’s persuadable, you’re more
focused on how can I steer the goals
of the system, the behaviors of the
system? Meaning an intelligent system, maybe, is a goal-oriented, goal-driven system with agency. And
when you call it persuadable, you’re
thinking more like, “Okay, here’s
an intelligence system that I’m
interacting with that I would like
to get it to accomplish certain
things.” But fundamentally,
they’re synonymous
or correlated, persuadability
and intelligence?
They’re definitely correlated. So,
let me… I wanna preface this with
one thing. When I say it’s an
engineering perspective, I don’t mean
that the standard tools that
we use in engineering and this
idea of enforced control and
steering is how we should
view all of the world. I’m not saying
that at all. And I wanna be very clear
on that because people do
email me and say, “This
engineering thing. You’re going to drain the life and the majesty out of these high-end human conversations.” My whole
point is not that at all. It’s
that of course at the right side of the
spectrum it doesn’t look like engineering
anymore, right? It looks like friendship and
love and psychoanalysis and all these other
tools that we have. But here’s what
I want to do. I want to be very
specific to my colleagues in regenerative medicine
and everything. Just imagine if I went to a bioengineering department or a genetics department and started talking about high-level cognition and psychoanalysis, right? They don’t
want to hear that. So I focus
on the engineering approach…
Because I want to say, look,
this is not a philosophical
problem. This is not a linguistics
problem. We are not trying to define
terms in different ways to make anybody feel
fuzzy. What I’m telling you is, if you want to
reach certain capabilities, if you want
to reprogram cancer, if you want to
regrow new organs, you want to defeat aging,
you want to do these specific things,
you are leaving too much on the table by making an unwarranted assumption: that the low-level tools that we have, the rules of chemistry and the kind of molecular rewiring, are going to be sufficient to get to where you want to go. It’s an assumption only, and it’s unwarranted. And we’ve now done experiments, not philosophy but real experiments, showing that if you take these other tools, you can in fact persuade the system in ways that have never been done before. And we can unpack all that.
But it is absolutely correlated
with intelligence, so let
me flesh that out a little bit.
So what is scaling in all of these things? Because I keep talking about the scaling, so what is it that’s scaling?
What I think is scaling is something I
call the cognitive light cone, and the
cognitive light cone is
the size of the biggest
goal state that you can pursue.
This doesn’t mean how far your senses reach, and it doesn’t mean how far you can act. So the James Webb
Telescope has enormous sensory
reach, but that doesn’t mean that’s
the size of its cognitive light cone. The
size of the cognitive light cone is the
scale of the biggest goal you can actively
pursue, but I do think it’s a useful
concept to enable us to think about very
different types of agents of different
composition, different provenance, you
know, engineered, evolved, hybrid,
whatever, all in the same framework.
And by the way, the reason I use light
cone is that it has this idea from physics
that you’re putting space and time in the
same diagram, which I like here. So if
you tell me that all your goals revolve around maximizing the amount of sugar in this 10-20 micron radius of spacetime, and that you have 20 minutes of memory going back and maybe five minutes of predictive capacity going forward, with that tiny little cognitive light cone, I’m gonna say you’re probably a bacterium. And if you say to
me, “Well, I’m able to care about a several-hundred-yard sort of scale, but I could never care
about what happens three weeks from now, two towns
over, just impossible,” I would say you might
be a dog. And if you say
to me, “Okay, I care
about really what happens, you know,
the financial markets on Earth,
you know, long after I’m dead and this
and that,” I’d say you’re probably a
human. And if you say to me,
“I’m not just saying it, I can actively care, in the linear range, about all
the living beings on this planet,” I’m
gonna say, “Well, you’re not a standard
human. You must be something else.” Because
humans, I don’t know, standard humans today,
I don’t think can do that. You must be some kind of a
bodhisattva or some other thing that has these massive
cognitive light cones. So I think what’s scaling, from zero on up, and I do think it goes all the way down, we can talk about even particles doing something like this, is the cognitive light cone. And so now this
is interesting. Here, I’ll try for a definition of life, for whatever it’s worth. I spend no time trying to make it stick, but if we want one, I think we
call things alive to the extent that
the cognitive light cone of that
thing is bigger than that of its
parts. So, in other words, a rock isn’t very exciting because the things it knows how to do are the things its parts already know how to do, which is follow gradients and things like that.
But living things are
amazing at aligning their competent parts so that the
collective has a larger cognitive light
cone than the parts. I’ll give you a very
simple example that comes up in biology
and that comes up in our cancer
program all the time. Individual cells
have little tiny cognitive
light cones. What are their
goals? Well, they’re trying
to manage pH, metabolic
state, some other things. There are some
goals in transcriptional space, some goals
in metabolic space, some goals in
physiological state space, but
they’re generally very tiny goals.
One thing evolution did was to provide
a kind of cognitive glue, which we can
also talk about, that ties them
together into a multicellular system.
And those systems have grandiose
goals. They’re making limbs, and if you’re a salamander and your limb is chopped off, the cells will regrow that limb with the right number of fingers, and then they’ll stop when it’s done.
The goal has been achieved. No individual cell
knows what a finger is or how many fingers you’re
supposed to have, but the collective absolutely
does. And that process of growing that
cognitive light cone from a single cell
to something much bigger, and of course
the failure mode of that process, which is cancer, right? When cells
disconnect, they physiologically disconnect
from the other cells. Their cognitive light
cone shrinks. The boundary between self and
world, which is what the cognitive light cone
defines, shrinks. Now they’re back to an amoeba.
As far as they’re concerned, the rest of the
body is just external environment, and they
do what amoebas do. They go where life
is good. They reproduce as much as they
can, right? So that cognitive light cone,
that is the thing that I’m talking about
that scales. So when we are looking for
life, I don’t think we’re looking
for specific materials. I don’t
think we’re looking for specific metabolic
states. I think we’re looking for
scales of cognitive light cone.
We’re looking for alignment of parts
towards bigger goals in spaces that
the parts could not comprehend.
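As a rough illustration, one could encode a cognitive light cone as a data structure and compare the waypoints from this answer. A minimal sketch, where the bacterium’s numbers come from the conversation above and the dog and human figures are loose guesses for scale only:

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    name: str
    spatial_radius_m: float    # how far out the biggest pursued goal extends
    past_horizon_s: float      # memory reach backward in time
    future_horizon_s: float    # predictive reach forward in time
    goal_spaces: tuple         # problem spaces the goals live in

agents = [
    CognitiveLightCone("bacterium", 20e-6, 20 * 60, 5 * 60,
                       ("metabolic", "physiological")),
    CognitiveLightCone("dog", 300.0, 7 * 24 * 3600, 3600.0,      # loose guesses
                       ("3D space", "social")),
    CognitiveLightCone("human", 1e7, 3e9, 3e9,                   # planetary scale, decades
                       ("3D space", "social", "symbolic", "financial")),
]

for a in agents:
    # Crudely score the "bigness" of a goal as the spacetime volume it spans.
    volume = a.spatial_radius_m ** 3 * (a.past_horizon_s + a.future_horizon_s)
    print(f"{a.name:9s} goal-spacetime ~ {volume:.1e} m^3*s in {a.goal_spaces}")
```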
And so cognitive light cone, just to make
clear, is about goals that
you can actively pursue
now. You said linear, like it’s within reach immediately.
No, I didn’t. Sorry, I didn’t mean that.
First of all, the goal is often removed in time. In other words, when you’re pursuing a goal, it means that you have a separation between current state and target state, at minimum. Your thermostat, right? Let’s just think about that. There’s a separation in time, because the thing you’re trying to make happen, that the temperature goes to a certain level, is not true right now. And all your actions are going to be around reducing that error, right? That basic homeostatic loop is all about closing that gap.
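As a minimal sketch of that loop in code, assuming a made-up linear heating/cooling response purely for illustration, the whole behavior is just “act to shrink the gap between current state and target state”:

```python
def thermostat(setpoint, temperature, steps=50, gain=0.3):
    """Homeostatic loop: every action works to reduce the setpoint error."""
    for _ in range(steps):
        error = setpoint - temperature   # separation between target and current state
        temperature += gain * error      # act so as to close that gap
    return temperature

print(thermostat(setpoint=21.0, temperature=15.0))  # converges toward 21.0
```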
When I said linear range, this is what I meant. If I say to you, “This terrible thing happened to ten people,” you have some degree of activation about it. And then I say, “No, no, actually it was 10,000 people.” You’re not a
thousand times more activated
about it. You’re somewhat more activated, but it’s
not a thousand. And if I say, “Oh my God, it was
actually 10 million people,” you’re,
you’re not a million times more activated.
You don’t have that capacity in the linear range. If you think about that curve, we reach a saturation point. I have some amazing colleagues in the Buddhist community with whom we’ve written some papers about this, about the radius of compassion: can you grow your cognitive system to the point that it really isn’t just your family group, it really isn’t just the hundred people you know in your circle? Can you grow your cognitive light cone to the point where you care about the whole, whether it’s all of humanity or the whole ecosystem, or the whole whatever? Can you actually care about that in the exact same way that we now care about a much smaller set of people? That’s what I mean by linear range.
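A quick worked version of that saturation, assuming, purely for illustration, that activation grows logarithmically with the number of people affected (caring “in the linear range” would instead track the number itself):

```python
import math

def activation(n_people):
    return math.log10(n_people)   # assumed saturating response, not linear

for n in (10, 10_000, 10_000_000):
    print(f"{n:>10,} people -> activation {activation(n):.1f}")
# 10 -> 1.0, 10,000 -> 4.0, 10,000,000 -> 7.0:
# a million-fold increase in victims yields only a 7x increase in response.
```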
But this is separated by time like a thermostat. I mean, if you zoom out far enough, a bacterium could be formulated to have a goal state of creating human civilization, because bacteria have a role to play in the whole history of Earth. So if you anthropomorphize the goals of a bacterium enough, it has a concrete role to play in the history of the evolution of human civilization. So when you define a cognitive light cone, do you need to look directly at short-term behavior?
Well, no. How do you know what the
cognitive light cone of something is?
Because as you’ve said, it could be
almost anything. The key is you have
to do experiments. And the way you do
experiments is you put barriers. You have to do
interventional experiments. You have to put
barriers between it and its goal, and you have
to ask what happens. And intelligence
is the degree of ingenuity that it
has in overcoming barriers
between it and its goal. Now, this is, I think, a totally doable but impractical and very expensive experiment. But
you could imagine setting up a
scenario where the bacteria were
blocked from becoming more complex.
And you can ask if they would try to
find ways around it, or whether
their goals are actually
metabolic. And as long as those goals are met, they’re
not going to actually get around your barrier.
The business of putting barriers
between things and their goals
is actually extremely powerful
because we’ve deployed it in
all kinds of… I’m sure we’ll get to this
later, but we’ve deployed it in all kinds of
weird systems that you wouldn’t think are
goal-driven systems. And what it allows
us to do is to get beyond just the
anthropomorphizing claims of saying, “Oh, yeah, I
think this thing is trying to do this or that.”
The question is, well,
let’s do the experiment.
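Here is a minimal sketch of that barrier assay in a toy grid world, entirely made up for illustration: both agents want the same goal, but only the one with more ingenuity, a planner that will detour, gets around the wall. The greedy agent’s failure is exactly the kind of experimental result that bounds a system’s intelligence.

```python
from collections import deque

GRID = [".......",
        "...#...",
        "...#...",
        "S..#..G",   # a barrier stands between the agent S and its goal G
        "...#...",
        "......."]

START, GOAL = (3, 0), (3, 6)

def passable(r, c):
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] != "#"

def neighbors(pos):
    r, c = pos
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if passable(r + dr, c + dc)]

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_agent(start, goal, max_steps=50):
    """Low ingenuity: only ever steps strictly closer to the goal."""
    pos = start
    for _ in range(max_steps):
        closer = [n for n in neighbors(pos) if dist(n, goal) < dist(pos, goal)]
        if not closer:
            return False              # stuck against the barrier
        pos = closer[0]
        if pos == goal:
            return True
    return False

def planning_agent(start, goal):
    """More ingenuity: breadth-first search, happily detours around walls."""
    seen, frontier = {start}, deque([start])
    while frontier:
        pos = frontier.popleft()
        if pos == goal:
            return True
        for n in neighbors(pos):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return False

print("greedy reaches goal:  ", greedy_agent(START, GOAL))    # False
print("planning reaches goal:", planning_agent(START, GOAL))  # True
```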
And one other thing I want to say about
anthropomorphizing is people say this to me
all the time. I don’t think that exists, and I’ll tell you why. I think it’s like heresy, or other terms that aren’t really a thing. Because if you unpack it,
here’s what anthropomorphism
means. Humans have a certain magic,
and you’re making a category
error by attributing that magic
somewhere else. My point is, we have the
same magic that everything has. We have a couple
of interesting things besides the cognitive
light cone and some other stuff, and it
isn’t that you have to keep the humans
separate because there’s some
bright line. All I’m arguing for is the scientific method, really. That’s all this is. All I’m saying is you can’t just make pronouncements such as, “Humans are this,” and leave it there. You have to do
experiments. After you’ve done your experiments,
you can say either, “I’ve done it, and I’ve
found… Look at that. That thing actually can predict
the future for the next, you know, 12 minutes.
Amazing.” Or you say, “You know what? I’ve tried all the things in the behaviorist handbook, and they just don’t help me with this. It’s a very low level of intelligence.” Fine, right? Done.
So that’s really all I’m arguing for is an
empirical approach, and then things like
anthropomorphism go away. It’s just a matter
of, have you done the experiment,
and what did you find?
And that’s actually one of the things you’re saying: that if you remove the categorization of things, you can use the tools of one discipline on everything.
You could try.
You try and then see. That’s what underlies the criticism of anthropomorphization, because what is anthropomorphization? It’s the idea that psychoanalysis of another human could technically be applied to robots, to AI systems, to more primitive biological systems, and so on. Try it.
Yeah. We’ve used everything from basic habituation and conditioning all the way through anxiolytics, hallucinogens, all kinds of cognitive modification, on a range of things you wouldn’t believe. And by the
way, I’m not the first person to come up with this.
So there was a guy named Bose
well over 100 years ago who
was studying how anesthesia
affected animals and animal
cells, and drawing specific
curves around electrical
excitability. And he then
went and did it with
plants and saw some very similar
phenomena. And being the genius that he was, he then said, “I don’t know when to stop.” Everybody thought he should have stopped long before plants, and people made fun of him for that. And he’s like, “The science doesn’t tell us where to stop. The tool is working, let’s keep going.”
And he showed interesting phenomena in metals and other kinds of materials, right? And so-
The interesting thing is that there is no generic rule that tells you when you need to stop. We make those up. Those are completely
made up. You have to just do the science
and find out.
Yeah, we’ll probably get to it.
You’ve been doing recent work on
looking at computational systems,
even trivial ones like algorithms,
sorting algorithms…
…and analyzing them in a behavioral kind of way, to see if there are minds inside those sorting algorithms. And, of course, let me make a pothead statement-question here: you could start to do things like trying psychedelics on a sorting algorithm. And what does that even look like? It sounds like a ridiculous question that’ll get you fired from most academic departments, but maybe, if you take it seriously, you could try it and see if it applies. If a thing
could be shown to have some kind of
cognitive complexity, some
kind of mind, why not
apply to it the same kind of
analysis and the same kind of tools,
like psychedelics, that you
would to a complex human mind?
At least, it might be a
productive question to ask.
You’ve seen spiders on
psychedelics, like more primitive
biological organisms on
psychedelics. Why not try to see
what an algorithm does
on psychedelics? Anyway.
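For what it’s worth, here is a minimal sketch of the behavioral (rather than code-reading) stance toward a sorting algorithm, loosely inspired by this line of work but not a reproduction of any published experiment: damage the “body” by freezing some cells so they refuse to swap, and measure how much of the goal the system still achieves.

```python
import random

def bubble_sort_with_damage(values, frozen):
    """Bubble sort in which swaps touching a frozen index are refused."""
    a = list(values)
    for _ in range(len(a)):
        for i in range(len(a) - 1):
            if a[i] > a[i + 1] and i not in frozen and i + 1 not in frozen:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

def sortedness(a):
    """Fraction of adjacent pairs in order: 1.0 means the goal is fully met."""
    return sum(a[i] <= a[i + 1] for i in range(len(a) - 1)) / (len(a) - 1)

random.seed(0)
values = random.sample(range(100), 20)
for n_frozen in (0, 2, 5):
    frozen = set(random.sample(range(20), n_frozen))
    result = bubble_sort_with_damage(values, frozen)
    print(f"{n_frozen} frozen cells -> sortedness {sortedness(result):.2f}")
```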
Well, the thing to remember is we don’t
have a magic sense or
really good intuition for
what the mapping is between the
embodiment of something and the
degree of intelligence it has.
We think we do, because we have an N-of-one example on Earth and we know what to expect from cells and snakes up to primates, but we really don’t. We’ll get into more of the stuff on the Platonic space, but our intuitions around that are so bad that to think we know enough not to try things at this point is, I think, really shortsighted.
Before we talk about the platonic
space, let’s lay out some
foundations. I think one useful
one comes from the paper,
A Technological Approach
to Mind Everywhere.
An experimentally grounded framework for
understanding diverse bodies and minds.
Could you tell me about this
framework, and maybe can you tell me
about Figure 1 from this paper
that has a few components?
One is the tiers of biological cognition that go from group, to whole organism, to tissue/organ, down to neural network, down to cytoskeleton, down to genetic network; and then there are layers of biological systems from ecosystem, down to swarm, down to organism, tissue, and finally cell.
So, can you explain this figure, and can you explain the so-called TAME framework? So, this is version 1.0, and there’s an update, a 2.0, that I’m writing at the moment,
trying to formalize in a careful
way all the things that we’ve been
talking about here, and in particular
this notion of having to do
experiments to figure out
where any given system is on a
continuum. Let’s just start with
Figure 2 for a second, then we’ll
come back to Figure 1. First, just
to unpack the acronym, I like the idea that it spells out TAME, because the central focus of this is interactions: how do you interact with a system to have a productive interaction with it? The idea is that cognitive
claims are really protocol claims. When you
tell me that something has some degree of intelligence, what you’re really saying is, “This
is the set of tools I’m going to deploy and
we can all find out how that
worked out for you.” And so,
technological, because I wanted to
be clear with my colleagues that
this was not a project in just
philosophy. This had very
specific, empirical implications that
are going to play out in engineering and
regenerative medicine and so on. A
technological approach to mind everywhere, this
idea that we don’t know yet where
different kinds of minds are to be
found, and we have to empirically
figure that out. So, what you see here in Figure 2 is basically this idea
that there is a spectrum, and I’m just
showing four waypoints along that
spectrum. As you move to the right of that
spectrum, a couple things happen: persuadability
goes up, meaning that the systems
become more reprogrammable, more plastic,
more able to do different things
than whatever they’re standardly doing. So, you
have more ability to get them to do new and
interesting things. The effort
needed to exert influence goes
down, that is, autonomy goes up.
To the extent that you are good at
convincing or motivating the system to
do things, you don’t have to sweat the
details as much, right? This also has
to do with what I call engineering
agential materials. When you engineer
wood, metal, plastic, things
like that, you are responsible for absolutely
everything because the material is not going to do
anything other than hopefully
hold its shape. If you’re
engineering active matter, or
you’re engineering computational
materials, or better yet, agential
materials like living matter,
you can do some very high-level
prompting and let the system then do
very complicated things that you don’t
need to micromanage. We all know
that that increases when you’re
starting to work with intelligent
systems like animals and humans and so
on. The other thing that goes down as you get to the right is the amount of mechanistic or physical knowledge you need in order to exert the influence. So, if you know how to change your thermostat’s set point, you really don’t need to know much of anything else, right?
You just need to know that it is a homeostatic
system and that this is how I change the
set point. You don’t need to know how the cooling and
heating plant works in order to get it to do complex things.
By the way, a quick pause just for people who
are listening: let me describe what’s in the
figure. There are four different
systems going up the scale of
persuadability. The first system
is a mechanical clock, then it’s
a thermostat, then it’s a
dog that gets rewards and
punishments, Pavlov’s dog,
and then finally a bunch of
very smart-looking humans communicating
with each other and arguing,
persuading each other using reasons.
There are arrows below showing persuadability going up as you move along these systems from the mechanical clock to a bunch of Greeks arguing, with the effort needed to exert influence going down, and likewise the mechanism knowledge needed to exert that influence going down.
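As a compact restatement of that figure, here is the spectrum as data; the 0-3 ordinal scores are just an illustrative encoding of the arrows, not numbers from the paper.

```python
# (system, persuadability, effort to influence, mechanism knowledge needed)
WAYPOINTS = [
    ("mechanical clock", 0, 3, 3),   # rewire it with a wrench
    ("thermostat",       1, 2, 2),   # change the set point
    ("dog",              2, 1, 1),   # rewards and punishments
    ("arguing humans",   3, 0, 0),   # reasons and persuasion
]

for name, persuadability, effort, mechanism in WAYPOINTS:
    print(f"{name:16s} persuadability={persuadability} "
          f"effort={effort} mechanism_knowledge={mechanism}")
```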
Yeah. I’ll give you an example about
that, panel C here with the dog.
Isn’t it amazing that humans
have been training dogs and
horses for thousands of years
knowing zero neuroscience? Equally amazing is that when I’m talking to you right now, I don’t need to worry about manipulating all of the synaptic proteins in your brain to make you understand what I’m saying,
and hopefully remember it. You’re gonna do
that all on your own. I’m giving you very
thin, in terms of information
content, very thin prompts,
and I’m counting on you as a
multi-scale agential material to
take care of the chemistry
underneath, all right?
So you don’t need a wrench to convince me?
Correct. I don’t need a wrench, and I don’t need physics to convince you, and I don’t need to know how you work.
Like, I don’t need to understand all of
the steps. What I do need to have is trust
that you are a multi-scale cognitive
system that already does that for
yourself, and you do. This is an amazing thing.
I don’t think people think about this enough. When you wake up in the
morning and you have social
goals, research goals, financial goals,
whatever it is that you have, in order for
you to act on those goals, sodium
and calcium and other ions have
to cross your muscle membranes. Those
incredibly abstract goal states
ultimately have to make the chemistry
dance in a very particular way,
right? Our entire body is a
transducer of very abstract things.
And by the way, not just our
brains, but our organs have anatomical goals
and other things that we can talk about,
because all of this plays out in
regeneration and development and so
on. But with the scaling of all of these things, the way you regulate yourself is not micromanagement. You don’t have to sit there and think, “Wow, I really have to push some sodium ions across this membrane.” All of that
happens automatically, and that’s
the incredible benefit of these
multi-scale materials. So what I was
trying to do in this paper is a couple of
things. All of these were, by the way,
drawn by Jeremy Guay, who’s this amazing graphic artist who works with me. First of
all, in panel A, which is the spiral: what I was trying to point out is that at every level of biological organization, we all know we’re sort of nested dolls of organs and tissues and cells and molecules and whatever, but this is not just structural. Every one of those layers is competent and is doing problem-solving in different spaces, spaces that are very hard for us to imagine. Because of our own evolutionary history, we humans are so obsessed with movement in three-dimensional
space. Even in AI you see
this all the time. They say, “Well, this
thing doesn’t have a robotic body, it’s not
embodied.” Yeah, it’s not
embodied by moving around in 3D
space, but biology has embodiments in all
kinds of spaces that are hard for us to
imagine, right? So your cells and
tissues are moving in high-dimensional
physiological state
spaces, in gene expression
state spaces, in anatomical
state spaces. They’re doing
that perception, decision-making,
action loop that we do in
3D space when we think about robots wandering
around your kitchen. They’re doing those
loops in these other spaces. And so the first
thing I was trying to point out is that every
layer of your body has its own
ability to solve problems in those
spaces. And then on the right, there’s this distinction people make: they say, “Well, there are living beings, and then there are engineered machines,” and they often follow up with all the things machines are never gonna be able to do, and whatever. And so what I was trying
to point out here is that it is very
difficult to maintain those kinds
of distinctions, because life is
incredibly interoperable.
Life doesn’t really care
if the thing it’s working with was
evolved through random trial and error or was
engineered with a higher degree of agency,
because at every level, within the cell, within the tissue, within the organism, within the collective, you can substitute engineered systems for naturally evolved ones, and vice versa.
And that question of, “Is it real, is
it biology or is it technology?” I
don’t think is a useful question anymore. So
I was trying to warm people up with this idea
that what we’re going to do now
is talk about minds in general,
regardless of their history or their composition.
It doesn’t matter what you’re made of.
It doesn’t matter how you got here. Let’s talk
about what you’re able to do and what your inner
world looks like. That
was the goal of that.
Is it useful, as a thought
experiment, as an experiment
of radical empathy, to try to
put ourselves in the space
of the different minds at each stage of the spiral? Like, what state space is human civilization, as a collective, embodied in? What does it operate in? So humans, individual
organisms, operate in 3D
space. That’s what we
understand. But when there’s a bunch of
us together, what are we doing together?
It’s really hard, and you have to do experiments,
which at larger scales are really difficult.
But there is such a thing?
There may well be. We have to do experiments.
I don’t know. Here’s an example:
Somebody will say to me, “Well, you know, with
your kind of panpsychist view, you might as well
think the weather is agential
too.” It’s like, “Well, I can’t say; we don’t know. But have you ever tried to see if a hurricane has habituation or sensitization?” Maybe. We haven’t done the
experiment. It’s hard, but you could,
right? And maybe weather systems can
have certain kinds of memories. I have
no idea. We have to do experiments. So I
don’t know what the entire human society is doing, but I’ll give you a simple example of the kinds of tools. We’re actively trying to build tools now to enable radically different agents to communicate, using AI and other methods to try to get this kind of communication going across very different spaces. I’ll give you a very kind of dumb example of how that might work. Imagine that
you’re playing tic-tac-toe against
an alien, so you’re in a
room. You don’t see him. And
so you draw the tic-tac-toe thing
on the board, on the floor. And
you know what you’re doing. You’re trying
to make straight lines with Xs and
Os, and you’re having a nice game. It’s
obvious that he understands the process.
Like, sometimes you win, sometimes you lose.
Like, it’s obvious. In that one little
segment of activity,
you guys are sharing a
world. What’s happening in the
other room next door? Well, let’s
say the alien doesn’t know anything about
geometry. He doesn’t understand straight
lines. What he’s doing is
he’s got a box, and it’s full
of basically billiard balls, each one of
which has a number on it. And all he’s
doing is he’s looking through the
box to find billiard balls whose
numbers add up to 15. He
doesn’t understand geometry at
all. All he understands is arithmetic. You
don’t think about arithmetic, you think
geometry. The reason you guys are playing the same game is that there’s this magic square that somebody constructed: a three-by-three square of the numbers one through nine, where every row, column, and diagonal adds up to 15. He has no
idea that there’s a geometric
interpretation to this.
He is solving the problem that
he sees, which is totally
algebraic. You don’t know anything about that. But
if there is an appropriate interface like this magic
square, you guys can share that
experience. You can have an experience. It
doesn’t mean you start to think like him. It means that
you guys are able to interact in a particular way.
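The magic-square interface is easy to verify in code. A small sketch (the square and the correspondence are standard mathematics; the framing as two players is just the story above): the human’s geometric win condition and the alien’s arithmetic one agree on every possible position.

```python
from itertools import combinations

# Order-3 magic square: every row, column, and diagonal sums to 15,
# and every triple of distinct numbers 1-9 summing to 15 is such a line.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

LINES = ([[(r, c) for c in range(3)] for r in range(3)] +               # rows
         [[(r, c) for r in range(3)] for c in range(3)] +               # columns
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])  # diagonals

def wins_geometrically(cells):
    """The human's view: do the held cells cover a straight line?"""
    return any(all(p in cells for p in line) for line in LINES)

def wins_arithmetically(cells):
    """The alien's view: do any three held numbers add up to 15?"""
    numbers = [MAGIC[r][c] for r, c in cells]
    return any(sum(triple) == 15 for triple in combinations(numbers, 3))

# The two descriptions agree on every possible set of held cells.
all_cells = [(r, c) for r in range(3) for c in range(3)]
for k in range(10):
    for held in combinations(all_cells, k):
        assert wins_geometrically(set(held)) == wins_arithmetically(set(held))
print("geometry and arithmetic agree on all positions")
```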
Okay, so there’s a mapping
between the two different
ways of seeing the world that allows
you to communicate with each other.
Of seeing a thin slice of the world.
Thin slice of the world. How do you
find that mapping? So you’re saying
we’re trying to figure out ways
of finding that mapping…
…for different kinds of systems.
What’s the process for doing that?
So, the process is twofold. One is to get a better understanding of the system: what space is it navigating, what goals does it have, what level of ingenuity does it have to reach those goals? For example, xenobots, right? We make xenobots, and anthrobots. These are biological
systems that have never existed on Earth
before. We have no idea what
their cognitive properties
are. We’re learning. We found some things. But
you can’t predict that from first principles,
because they’re not at all what their
past history would inform you of.
Can you actually explain briefly what
a xenobot is and what an anthrobot is?
So one of the things that we’ve been doing
is trying to create novel beings that
have never been here before. The
reason is that typically when
you have a biological system,
an animal or a plant, and you
say, “Hey, why does it have certain
forms of behavior, certain forms
of anatomy, certain forms of physiology?
Why does it have those?” The answer is always the same: well, there’s a history of evolutionary selection, a long, long history of adaptation going back, and there were certain environments, and this is what survived, and so that’s why it is the way it is. So what I wanted to do was
break out of that mold, and to
basically force us as a community
to dig deeper into where these
things come from. And that means taking away the
crutch where you just say, “Well, it’s evolutionary
selection that’s… That’s why it looks like
that.” So in order to do that, we have to
make artificial synthetic beings now.
To be clear, we are starting with
living cells, so it’s not that they
had no evolutionary history. The
cells do. They had evolutionary history
in frogs or humans or whatever. But the
creatures they make and the capabilities that
these creatures have were never directly
selected for. And in fact, they never existed.
So you can’t tell the same kind of story.
And what I mean is, we can take
epithelial cells off of an early frog
embryo, and you don’t change
the DNA. No synthetic biology
circuits, no material scaffolds,
no nanomaterials, no weird drugs,
none of that. What we’re mostly
doing is liberating them
from the instructive
influences of the rest of the
cells that they were in in their bodies. And
so when you do that… Normally, these cells
are bullied by their neighboring cells
into having a very boring life. They
become a two-dimensional outer covering
for the embryo, and they keep out the
bacteria, and that’s that. So you might ask, “Well,
what are these cells capable of when you take
them away from that influence?” So
when you do that, they form another
little life form we call
a xenobot. And it’s this
self-motile little thing that has cilia
covering its surface. The cilia are
coordinated so they row against the water, and
then the thing starts to move, and has all kinds
of amazing properties. It has different
gene expression, so it has its
own novel transcriptome. It’s
able to do things like kinematic
self-replication, meaning make
copies of itself from loose
cells that you put in its environment.
It has the ability to respond to sound,
which normal embryos don’t
do. It has these novel
capacities. And we did that, and we said,
“Look, here are some amazing features of this
novel system. Let’s try to
understand where they came from.”
And some people said, “Well,
maybe it’s a frog-specific
thing, you know? Maybe this is just
something unique to frog cells.” And
so we said, “Okay, what’s the furthest
you can get from frog embryonic cells?
How about human adult cells?”
And so we took cells from
adult human patients who were
donating tracheal epithelia for
biopsies and things like that, and
those cells, again, no genetic
change, nothing like that. They self-organized
into something we call anthrobots.
Again, a self-motile little
creature, with about 9,000 differentially
expressed genes, so about half
the genome is now expressed differently. And
they have interesting abilities.
For example, they can heal human
neural wounds. So in
vitro, if you plate some
neurons and you put a big scratch through it so you
damage them, anthrobots can settle down and,
without us having to teach them to do it,
they will spontaneously try to
knit the neurons across.
What is this video that
we’re looking at here?
So this is an anthrobot. So often when I give
talks about this, I show people this video,
and I say, “What do you think this is?” And
people will say, “Well, it looks like some
primitive organism you got from
the bottom of a pond somewhere.”
And I’ll say, “Well, what do you think the genome would
look like?” And they say, “Well, the genome would
look like some primitive creature.” Right? If
you sequence that thing, you’ll get 100% Homo
sapiens. And that doesn’t look like any
stage of normal human development. It
doesn’t act like any stage of human
development. It has the ability to
move around. It has, as I said,
over 9,000 differentially
expressed genes. Also
interestingly, it is younger
than the cells that it comes from. So it
actually has the ability to roll back its age,
and we could talk about that and
what the implications of that are.
But to go back to your original question,
what we’re doing with these kind of
systems …
Trying to talk to it.
We’re trying to talk to it. That’s exactly right. And
not just to this. We’re trying to talk to molecular
networks. So, we found a couple years
ago, we found that gene regulatory
networks, never mind the cells, but
the molecular pathways inside of cells
can have several different kinds of
learning, including Pavlovian conditioning.
And what we’re doing now is trying to talk
to it. The biomedical applications are
obvious. Instead of, “Hey, Siri,” you
want, “Hey, liver, why do I feel like crap
today?” And you want an answer. “Well, you
know, your potassium levels are this and that,
and I don’t feel good for these
reasons.” And you should be able to
talk to these things, and there
should be an interface that allows us
to communicate, right? And I think
AI is gonna be a huge component
of that interface of allowing us to
talk to these systems. It’s a tool to
combat our mind blindness, to
help us see diverse other…
very unconventional minds
that are all around us.
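As an illustration of what Pavlovian conditioning means functionally, here is a toy Hebbian sketch; this is a hypothetical illustration of the concept, not the chemical-kinetics models Levin's group actually used for gene regulatory networks:

```python
# Toy Pavlovian conditioning: a hardwired US->response link plus a learnable
# CS->response weight that strengthens when CS and US are paired.
W_US = 1.0                      # hardwired: unconditioned stimulus -> response

def respond(cs, us, w_cs):
    return cs * w_cs + us * W_US

w_cs = 0.0                      # learnable: conditioned stimulus -> response
for _ in range(10):             # pairing phase: CS and US presented together
    r = respond(cs=1.0, us=1.0, w_cs=w_cs)
    w_cs += 0.2 * r * (1.0 - w_cs)   # bounded Hebbian update

print(respond(cs=1.0, us=0.0, w_cs=w_cs))  # CS alone now evokes a response
```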
Can you generalize that? So let’s say
we meet an alien or an unconventional
mind here on Earth. Think of it
as a black box. You show up.
What’s the procedure for
trying to get some hooks
into communication
protocol with the thing?
Yeah. That is exactly the
mission of my lab. It is
to enable us to develop tools to
recognize these things, to learn
to communicate with them, to
ethically relate to them.
And in general, to expand
our ability to do this
in the world around us. I specifically
chose these kinds of things
because they’re not as alien as proper
aliens would be. So we have some hope.
I mean, we’re made of them. We have many things in
common. There’s some hope of understanding them.
You’re talking about
xenobots and anthrobots?
Xenobots and anthrobots and cells and everything
else. But they’re alien in a couple of
important ways. One is
the space they live in is
very hard for us to imagine. What
space do they live in? Well,
your body, your body’s cells, long
before we had a brain that was good
for navigating three-dimensional space,
were navigating the space of anatomical
possibilities. It was going from,
you start as an egg, and you
have to become, you know, a snake or
a giraffe or whatever, or a human,
whatever we’re gonna be. And I
specifically am telling you that
this general idea when people model that
with cellular automata type of ideas,
this open loop kind of thing where, well,
everything just follows local rules and
eventually there’s complexity, and here you go.
Now, you’ve got a giraffe or a
human. I’m specifically telling you
that that model is totally insufficient
to grasp what’s actually going on.
What’s actually going on, and there have been
many experiments on this, is that the system is
navigating a space. It is navigating
a space of anatomical possibilities.
If you try to block where it’s going,
it will try to get around you. Faced
with things it’s never seen before, it
will try to come up with a solution.
If you really defeat its ability
to do that, which you can.
You know, they’re not infinitely intelligent,
so you can defeat them. You will either
get birth defects or you will get
creative problem-solving such as
what you’re seeing here with xenobots and
anthrobots. If you can’t be a human,
you’ll find another way to be. You can
be an anthrobot, for example, or you’ll be
something else.
Just to clarify, what’s the difference
between cellular automata type
of action where you’re just responding to
your local environment and creating some
kind of complex behavior, and operating
in the space of anatomical possibilities?
Sure.
So there’s a kind of goal, I
guess, you’re articulating-
Yes.
There is some kind-
Yes
… of thing. There’s
a will to X something.
The will thing, let’s put that aside-
Okay, sorry.
…because that’s a…
Well, it’s fine too.
There I go, anthropomorph- I just always
love to quote Nietzsche, so there we go.
Yeah. Yeah, yeah. And I’m not saying
that’s wrong. I’m just saying I don’t have
data for that one, but I’ll tell you
the stuff that I’m quite certain of.
There are a couple of different formalisms
that we have in control theory. One of those
formalisms is open-loop complexity. In
other words, I’ve got a bunch of subunits,
like a cellular automaton.
They follow certain rules, and
you turn the crank, time goes forward,
whatever happens, happens. Clearly you can get
complexity from this. Clearly you can get
some very interesting-looking things, right?
So the game of life, all those kinds of
cool things, right? You can get complexity.
No, no, no problem. But
the idea that that model
is going to be sufficient to explain and
control things like
morphogenesis is a hypothesis.
It’s okay to make that hypothesis,
but we know it’s false
despite the fact that that is what
we learned, you know, in basic, uh, cell biology and developmental
biology classes.
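A toy sketch of the contrast (hypothetical update rules and numbers, purely illustrative, not Levin's models): an open-loop rule just rolls forward and inherits any damage, while a closed-loop system with a stored setpoint reaches the same goal by different means:

```python
def open_loop(state, steps, perturb):
    # Fixed local rule turned forward; whatever happens, happens.
    for t in range(steps):
        state += perturb(t)            # damage just propagates
        state += 1                     # the rule never consults a goal
    return state

def closed_loop(state, goal, steps, perturb):
    # Error-driven: compare to a stored setpoint and reduce the difference.
    for t in range(steps):
        state += perturb(t)
        state += 0.5 * (goal - state)  # corrective step toward the goal
    return state

bump = lambda t: -3.0 if t == 5 else 0.0   # one-time perturbation
print(open_loop(0.0, 20, bump))            # 17.0: the damage persists
print(closed_loop(0.0, 10.0, 20, bump))    # ~10.0: same goal, different means
```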
The first time you see something like
this, inevitably, especially if you’re
an engineer in those classes, you go,
“Hey, how does it know to do that?
How does it know, you know, four fingers
instead of seven?” What they tell you is,
“It doesn’t know anything.” They make sure
that’s very clear. They all insist, like,
when we learn these things, they
insist, “Nothing here knows anything.
There are rules of chemistry, they roll
forward, and this is what happens.”
Okay. Now, that model is testable.
We can ask, “Does that model explain
what happens?” Here’s where that model
falls down. If you have that model
and situations change, either
there’s damage or something in the
environment that’s happened,
those kind of open-loop models
do not adjust to give you
the same goal by different means.
This is William James’ definition of
intelligence: the same goal by different
means. And in particular, working them
backwards, let’s say you are in
regenerative medicine and you say,
“Okay, but this is the situation now. I
want it to be different.” What should
the rules be? It’s not reversible. So the thing with
those kind of open-loop models is they’re not reversible.
You don’t know what to do to make the outcome that
you want. All you know how to do is roll them
forward, right? Now, in biology, we
see the following: If you have a
developmental system and
you put barriers between-
So, I’m going to give you two pieces of
evidence that suggest that there is a goal.
One piece of evidence is that if
you try to block these things from
the outcome that they normally
have, they will do some
amazing things. Sometimes very
clever things, sometimes not at
all the way that they normally do it, right?
So this is William James’ definition.
By different means, by following different
trajectories, they will go around various
local maxima and minima to get to where
they need to go. It is navigation of a
space. It is not blind, turn the crank,
and wherever we end up is where we end up.
That is not what we see experimentally.
And more importantly, I
think, what we’ve shown, and
this is something that
I’m particularly happy with in our lab,
over the last 20 years, we’ve shown the
following: We can actually rewrite the
goal states because we found them.
We have shown through our work
on bioelectric imaging and
bioelectric reprogramming, we have
actually shown how those goal memories
are encoded, at least in some cases. We
certainly haven’t got them all, but we have
some. If you can find where
the goal state is encoded,
read it out, and reset it, and the system
will now implement a new goal based
on what you just reset, that is
the ultimate evidence that your goal
directed model is working. Because if
there was no goal, that shouldn’t be
possible. Right? Once you
can find it, read it,
interpret it, and rewrite
it, it means that by any
engineering standard, it means that you’re
dealing with a homeostatic mechanism.
How do you find where the goal’s encoded?
So, through lots and lots of hard work.
The barrier thing is part of that?
Creating barriers and observing?
The barrier thing tells you that
you should be looking for a goal.
So step one when you approach an agentic
system is to create a barrier of different
kinds until you see how
persistent it is at
pursuing the thing it seemed to
have been pursuing originally.
And then you know, okay, cool, this
is a… This thing has agency,
first of all. And then second of all,
you start to build the intuition about
exactly which goal it’s pursuing.
Yes. The first couple of steps are all imagination.
You have to ask yourself, “What space is this
thing even working in?” And you really have
to stretch your mind, because we can’t
imagine all the spaces that systems work
in, right? So, step one is, what space
is it? Step two, what do I think the goal is?
And let’s not mistake step two, you’re not
done. Just because you have made a hypothesis, that
doesn’t mean you can say, “Well, there, I see it
doing this, therefore that’s the goal.” You don’t
know that. You have to actually do experiments.
Now, once you’ve made those hypotheses, now you do the
experiments. You say, “Okay, if I want to block it
from reaching its goal, how do I do that?” And
this, by the way, is exactly the approach we took
with the sorting algorithms and with
everything else. You hypothesize the goal, you
put a barrier in, and then you
get to find out what level of
ingenuity it has. Maybe what you see is, “Well,
that derailed everything, so probably this
thing isn’t very smart.” Or you say, “Oh, wow,
it can go around and do these things.” Or you
might say, “Wow, it’s taking a
completely different approach using its
affordances in novel ways, like that’s
a high level of intelligence.” You will
find out what the answer is.
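A concrete toy version of that barrier assay (a hypothetical grid navigator, not any of the lab's actual systems): hypothesize the goal, insert barriers of increasing severity, and see whether the system still gets there:

```python
# Barrier assay on a 5x5 grid: does the navigator still reach the goal
# when the straight path is blocked ("same goal by different means")?
from collections import deque

def reaches_goal(walls, start=(0, 0), goal=(4, 4), size=5):
    frontier, seen = deque([start]), {start}
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == goal:
            return True
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size \
               and (nx, ny) not in walls and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return False

print(reaches_goal(walls=set()))                       # True: direct route
print(reaches_goal(walls={(2, y) for y in range(4)}))  # True: goes around
print(reaches_goal(walls={(2, y) for y in range(5)}))  # False: defeated
```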
Another pothead question: Is it
possible to look at, speaking of
unconventional organisms,
and going to Richard Dawkins, for
example, with memes, is it possible to
think of things like ideas? Like,
how weird can we get? Can we look
at ideas as organisms, then
creating barriers for those ideas,
and seeing if the ideas themselves…
If you take individual ideas
and try to empathize
and visualize what kind of
space they might be operating in, can they
be seen as organisms that have a mind?
Yeah. Okay, if you want to get
really weird, we can get really
weird here. Think about
the caterpillar-butterfly
transition, okay? So, you’ve got a caterpillar,
a soft-bodied creature, which has a
particular controller that’s suitable
for running a soft body, you know, a
robot. It has a brain for that task,
and then it has to become this
butterfly, a hard-bodied creature that
flies around. Okay. During the process of
metamorphosis, its brain
is basically ripped up and
rebuilt from scratch, right? Now,
what’s been found is that if you
train the caterpillar, so you give it a new
memory, meaning that if the caterpillar
sees this color disc, then it
crawls over and eats some leaves.
Turns out, the butterfly
retains that memory. Now, the
obvious question is, how do you retain
memories when the medium is being
refactored like that? Let’s put that
aside. I’m going to get somewhere even
weirder than that. There’s something else
that’s even more interesting than that.
It’s not just that you have to
retain the memory. You have
to remap that memory onto a
completely new context, because
guess what? The butterfly doesn’t move the way
the caterpillar moves, and it doesn’t care about
leaves. It wants nectar from flowers.
And so, if that memory is going to
survive, it can’t just persist. It has to-
be remapped.
…be remapped into a novel context.
Now, here’s where things get weird.
We can take a couple of different
perspectives here. We can take the
perspective of the caterpillar facing
some sort of crazy singularity
and say, “My God, I’m going to cease
to exist, but, you know, I’ll sort of
be reborn in this new higher-dimensional
world where I’ll fly.” Okay, so
that’s one thing. We can
take the perspective of the
butterfly and say that, “Well, here
I am, but, you know, I seem to be
saddled with some tendencies and some
memories, and I don’t know where the hell they
came from, and I don’t remember exactly
how I got them, and they seem to
be a core part of my psychological
makeup, and, you know, they
come from somewhere. I don’t know where they come
from.” Right? So you can take that perspective.
But there’s a third perspective that I
think is really interesting and useful.
The third perspective is out of the memory
itself. If you take the perspective of the memory,
what is a memory? It is a
pattern. It is an informational
pattern that was continuously
reinforced within one
cognitive system, and
now here I am, this
memory. What do I need to
do to persist into the
future? Well, now I’m facing the
paradox of change. If I try to remain
the same, I’m gone. There’s no way the
butterfly is going to retain me in the
original form that I’m in now.
What I need to do is change,
adapt, and morph. Now, you might
say, “Well, that’s kind of
crazy. How are you taking the perspective
of a pattern within an excitable medium?”
Right? Agents are physical
things. You’re talking about
information, right? So let me tell you
another quick science fiction
story. Imagine that some
creatures come out from the center of the
Earth. They live down in the core. They’re
super dense, okay? They’re incredibly
dense because they live down in the core.
They have gamma ray vision
for… and so on. So they
come out to the surface. What do
they see? Well, all of this stuff
that we’re seeing here, this is like a
thin plasma to them. They are so dense.
None of this is solid to them. They don’t see any
of this stuff. So they’re walking around, you know, because the planet
is covered in this thin gas. And
one of them is a scientist, and he’s taking
measurements of the gas, and he says to the others, “You
know, I’ve been watching this gas, and there are, like,
little whirlpools in this gas, and they almost look like
agents. They almost look like they’re doing things.
They’re moving around. They kind of hold themselves together
for a little bit, and they’re trying to make stuff happen.”
And the others say, “Well, that’s crazy.
Patterns in a gas can’t be agents. We’re agents.
We’re solid. This is just patterns in an excitable
medium. And by the way, how long do they hold together?”
He says, “Well, about 100 years.”
“Well, that’s crazy. No real agent
can exist that dissipates that fast.”
Okay. We are all metabolic patterns, among other
things, right? And so you see what I’m warming up to here.
So one of the things that we’ve been trying to dissolve,
and this is some work that I’ve done with Chris Fields
and others, is this distinction between thoughts and thinkers.
So, all agents are patterns within
some excitable medium. We could talk about
what that is, and they can spawn off others. And now you
can have a really interesting spectrum.
Here’s the spectrum. You can have
fleeting thoughts, which are like waves
in the ocean when you throw a rock in. You know, they sort
of go through the excitable medium and then they’re gone.
They pass through and they’re gone, right?
So those are kind of fleeting thoughts. Then you can have patterns
that have a degree of persistence, so they might be hurricanes
or solitons or persistent thoughts or earworms or depressive
thoughts. Those are harder to get rid of. They stick around
for a little while. They often do a little bit of niche
construction, so they change the actual brain to make it
easier to have more of those thoughts, right? Like, that’s a
thing. And so they stay around longer. Now, what’s further than that?
Well, personality fragments of a dissociative
identity disorder, they’re more stable.
And they’re not just on autopilot. They
have goals and they can do things. And then
past that is a full-blown human personality.
And who the hell knows what’s past that? Maybe some sort of
trans-human, you know, transpersonal, like, I don’t know, right?
But this idea, again,
I’m back to this notion of a spectrum.
There is not a sharp distinction between, you know,
we are real agents and then we have these thoughts.
Yeah, patterns can be agents too, but again,
you don’t know until you do the experiment.
So, if you want to know whether a soliton or a hurricane
or a thought within a cognitive system is its own agent,
do the experiment. See what it can do.
Can it learn from experience? Does it have memories?
Does it have goal states? What can it do, right? Does it have language?
So, coming back to your original question,
yeah, we can definitely apply this methodology to ideas and concepts
and social, uh, you know, whatevers, but you’ve got to do the experiment.
That’s such a challenging thought experiment, thinking about
memories, from the caterpillar to the butterfly, as an organism.
I think at the very basic level, intuitively,
we think of organisms as hardware…
…and software as not possibly
being able to be organisms, but-
… what you’re saying is that
it’s all just patterns in an
excitable medium, and it
doesn’t really matter
what the pattern is or what the
excitable medium is. We need
to do the testing of how
persistent it is, how goal-oriented it is.
And there are certain kinds of
tests to do that, and you can
apply that to memories. You can apply
that to ideas. You can apply that to
anything, really. I mean, you
could probably think about, like,
consciousness. You
could… there’s really no
boundary to what you can imagine.
Probably really, really wild
things could be minds.
Yeah. Stay tuned. I mean, this is exactly what
we’re doing. We’re getting progressively,
like, more and more unconventional.
I mean, so this whole distinction
between software and hardware,
I think it’s a super important
concept to think about. And
yet, the way we’ve mapped it
onto the world, I would like
to blow that up in the following
way. And again, I want
to point out what the practical
consequences are because this is not
just, you know, fun stories that
we tell each other. These have
really important research
implications. Think about a Turing
machine. So one thing you can
say is the machine’s the agent.
It has passive data, and it
operates on the data, and that’s it. The
story of agency is the story of whatever that
machine can and can’t do. The data is passive,
and it moves it around. You can tell the opposite
story. You can say, “Look, the patterns on
the data are the agent. The machine is a
stigmergic scratch pad in the
world of the data doing what
data does.” The machine is just the consequences,
the scratch pad of it working itself
out. And both of those stories make sense
depending on what you’re trying to do. Here’s
the biomedical side of things. So our
program in bioelectrics and aging, okay?
One model you could have is the
physical organism is the agent and
the cellular collective has
pattern memories, specifically what I was
saying before, goals, anatomical goals.
If you want to persist for 100
plus years, your cells better
remember what your correct shape is and
where the new cells go, right? So there
are these pattern memories. They exist
during embryogenesis, during regeneration,
during resistance to aging. We can
see them. We can visualize them. One
thing you can imagine is, fine, the
physical body, the cells, are the
agent. The electrical
pattern memories are just
data, and what might happen during
aging is that the data might
get degraded. They might get fuzzy. And
so what we need to do is reinforce the
memories, reinforce the pattern memories.
That’s one specific research program,
and we’re doing that. But
that’s not the only research
program because the other thing
you might imagine is that, what
if the patterns are the
agent in exactly the
same sense as we think in our brains? It’s
the patterns of electrophysiological
computations, whatever else,
that is the agent, right?
And that what they’re doing in the brain
are the side effects of the patterns
working themselves out. And those side effects
might be to fire off some muscles, glands,
and other things. From that
perspective, maybe what’s actually
happening is, maybe the agent’s finding it
harder and harder to be embodied in the
physical world. Why? Because
the cells might get less
responsive. In other words,
the cells are sluggish. The
patterns are fine. They’re having a harder
time making the cells do what they need to
do, and maybe what you need to do is not
reinforce the memories. Maybe what you need
to do is make the cells more responsive
to them, and that is a different research
agenda, which we are also doing. We have
evidence for that as well, actually now.
We published it recently. So
my point here is, when we tell
these crazy sci-fi stories, the only worth
to them, and the only reason I’m talking
about them now, and a year ago I wasn’t
talking about this stuff, is because these are
now actionable in terms of specific experimental
research agendas that are heading to the
clinic, I hope, in some of
these biomedical approaches. So
now here we can go beyond
this and say, “Okay, up until
now we’ve considered,
what are disease states?”
Well, we know there’s organic disease, something
that’s physically broken. We can see the
tissues breaking down. There’s damage in
the joint, we can see what the liver
is doing, you know, we can see
these things. But what about
disease states that are not
physical states? They’re
physiological states,
informational states, or cognitive
problems? So in all of these
other spaces, you can start
to ask, what’s a barrier in gene
expression space? What’s a local
minimum that traps you in
physiological state space? And what
is a stress pattern that keeps itself
together, moves around the body, causes
damage, tries to keep itself
going, right? What level of
agency does it have? This
suggests an entirely different
set of approaches to biomedicine.
And, you know, anybody
who’s, let’s say, in the
alternative medicine community is
probably yelling at the screen right now
saying, “We’ve been saying this for hundreds
of years.” And yeah, I’m
well aware these ideas are not new. What’s new is
being able to take them and make them
actionable and say, “Yeah, but we can
image this now. I can now actually see the
bioelectric patterns and why they go here
and not there.” And we have the tools
that now hopefully will get
us to therapeutics. So this
is very actionable stuff,
and it all leans on
not assuming we know minds when we see
them, because we don’t, and we have to do
experiments.
To return back to the software-hardware
distinction, you’re saying
that we can see the
software is the organism
and the hardware is just the scratch pad,
or you could see the hardware as the
organism and the software is the thing
that the hardware generates,
and in so doing, we can
decrease the amount of
importance we assign to
something like the human brain, or it
could be the activations, it could be
the electrical signals that are the
organisms, and then the brain is
the scratch pad.
And by saying scratch pad, I don’t mean
it’s not important. When we get to talking
about Platonic space, we have to talk about
how important the interface actually
is. The scratch pad isn’t unimportant;
the scratch pad is critical.
It’s just that my only point
is that when we have these
formalisms of software, of hardware,
of other things, the way we map those
formalisms onto the world is not
obvious. It’s not given to us.
We get used to certain things, right?
But who’s the hardware, who’s the
software, who’s the agent and who’s the
excitable medium is to be determined.
So this is a good place to
talk about the increasingly
radical, weird ideas that you’ve been
writing about. You’ve mentioned it a few
times: the Platonic space.
So there’s this Ingressing Minds paper
where you described the Platonic
space. You mentioned there’s
an asynchronous conference
happening, which is a
fascinating concept because it’s
asynchronous. People are just
contributing asynchronously.
So what happened was this crazy notion,
which I’ll describe momentarily.
I have given a couple talks on it. I then
found a couple papers in the machine
learning community called
the Platonic Representation
Hypothesis, and I said, “That’s pretty
cool. These guys are climbing up to the
same point where I’m getting at it from
biology and philosophy and whatever.
They’re getting there from computer science and machine learning.” We’ll take a couple hours,
I’ll give a talk, they’ll give a talk,
we’ll talk about it. I thought there were
going to be three talks at this thing.
Once I started reaching out to people for
this, everybody sort of
said, “You know, I know
somebody who’s really into this stuff, but
they never talk about it because there’s no
audience for this.” So I reached out to them.
And then they said, “Yeah. Oh, yeah, I know
this mathematician,” or, “I know this, you
know, economist, whatever, who has these
ideas and there’s nowhere we can have her talk
about them.” So I got this whole list and it
became completely obvious that we
can’t do this in a normal… it’s…
We are now booked up through
December. So every week in our
center, somebody gives a talk. We kind of
discuss it. It all goes on this thing.
I’ll give you a link to it, and then
there’s a huge running discussion
after that, and then in the end, we’re all
going to get together for an actual real-time
discussion section and talk about it.
But there’s going to be probably 15 or
so talks about this from all
kinds of disciplines. It’s
blown up in a way I didn’t expect. I hadn’t
realized how much undercurrent of these
ideas already existed, ready to go. Like, now
is the time. I’ve been thinking about these
things for, I don’t know, 30-plus years.
I never talked about them before
because they weren’t actionable
before. There wasn’t a way to actually
make empirical progress with this.
Now, you know, this is something that
Pythagoras and Plato and probably many
people before them talked about,
but now we’re to the point
where we can actually do experiments,
and they’re making a difference in our
research program.
You can just look it up: Platonic Space
Conference. There’s a bunch
of different fascinating
talks. Yours is first, on The Patterns of
Forms and Behavior, Beyond Emergence, then
Radical Platonism and
Radical Empiricism from
Joel Dietz, and Patterns and Explanatory
Gaps In Psychotherapy, Does God Play Dice?
from Alexey Tolchinsky, and so
on. So, let’s talk about it.
What is it? And it’s fascinating that
the origins of some of these ideas
are connected to ML people thinking
about representation space.
Yeah. The first thing I want to say
is that while I’m currently calling
it the Platonic space,
I am in no way trying to stick close to
the things that Plato actually thought
about. In fact, to whatever extent we even know
what that is, I think I depart from that in
some ways, and I’m going to
have to change the name at some point.
The reason I’m using the name now is
because I wanted to be clear about a
particular connection to mathematics,
which a lot of mathematicians would call
themselves Platonists because
what they think they’re doing
is discovering… Not inventing as a human
construction, but discovering a structured
ordered space of truths. Let’s put it
this way: In biology, as in physics,
there’s something very curious that
happens that if you keep asking why,
you keep asking why,
then something interesting goes on.
Let’s… Well, I’ll give you two examples.
First of all, imagine cicadas. The cicadas
come out on 13-year and 17-year cycles, okay?
And so if you’re a biologist
and you say, “So why is that?”
Then you get this explanation for, “Well,
it’s because they’re trying to be off-cycle
from their predators. Because if it was 12
years, then a predator on a two-year, three-year,
four-year, or six-year cycle would
eat you every time you come out, right?
So you say, “Okay, cool. That makes
sense. What’s special about 13 and 17?”
“Oh, they’re prime.” “Uh-huh. And why are they
prime?” Well, now you’re in the math department.
You’re no longer in the biology department.
You’re no longer in the physics department.
You’re now in the math department to
understand why the distribution of primes is
what it is. Another example, and I’m
not a physicist, but what I see is
every time you talk to a physicist and
you say, “Hey, why do the, you know,
leptons do this or that, or the
fermions are doing whatever?”
Eventually, the answer is, “Oh, because
there’s this mathematical, you know,
this SU(8) group or whatever the heck it is,
and it has certain symmetries in these certain
structures.” “Yeah, great. Once again,
you’re in the math department.”
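The cicada arithmetic is easy to check. With a cicada period p and a predator cycle q, fatal co-emergences recur every lcm(p, q) years, so a prime period pushes them as far apart as possible (the 13- and 17-year broods are real; the predator cycles below are illustrative):

```python
# First co-emergence year for each cicada period against short predator cycles.
from math import lcm

for cicada in (12, 13, 17):
    overlaps = {pred: lcm(cicada, pred) for pred in (2, 3, 4, 6)}
    print(cicada, overlaps)
# 12 collides every 12 years with all of them; 13 and 17 push the first
# collision out to 26..102 years, a free lunch from the primes.
```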
So something interesting happens is that
there are facts that you come across,
many of them are very surprising. You don’t
get to design them. You get more out than
you put in, in a certain way, because
you make very minimal assumptions.
And then certain facts are thrust upon you. For
example, the value of Feigenbaum’s constant,
or the value of e, the base of the natural logarithm.
These things you sort of discover, right?
And the salient fact is this:
if those facts were different,
then biology and physics would be different, right?
They impact the physical world
instructively, functionally. If the distribution of
primes were something else, well, then the cicadas would
have been coming out at different times.
But the reverse isn’t true. What I mean is,
there is nothing you can do in the
physical world, as far as I know,
to change e or to change
Feigenbaum’s constant. You could have
swapped out all the constants at the Big Bang,
right? You can change all the different things;
you are not going to change those
things. So this, I think,
Plato and Pythagoras understood
very clearly, that there is a
set of truths which impact
the physical world,
but they themselves are not
defined by and determined by what
happens in the physical world. You can’t change
them by things you do in the physical world, right?
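A tiny numerical sketch of that "thrust upon you" quality: two unrelated constructions, a compounding limit and a factorial series, hand you the same constant e, and nothing in either setup lets you choose a different value:

```python
# Two independent routes to e = 2.718281828...
n = 10**7
compound = (1 + 1 / n) ** n        # limit of (1 + 1/n)^n as n grows

series, term = 0.0, 1.0
for k in range(1, 25):             # partial sum of 1/0! + 1/1! + 1/2! + ...
    series += term
    term /= k

print(compound, series)            # both converge on the same value
```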
And so I’ll make a couple claims about
that. One claim is, I think we call
physics those things that are constrained
by those patterns. When you say, “Hey,
why is this the way it is?” Ah, it’s
because that’s how symmetries
or topology or whatever work.
Biology is the things that are
enabled by those. They’re free
lunches. Biology exploits
these kinds of truths. And
really it enables biology and
evolution to do amazing things without having to
pay for it. I think there’s a lot of free lunches
going on here. And so I
show you a xenobot or an
anthrobot, and I say, “Hey, look,
here are some amazing things
they’re doing,” that tissue has never
done before in its history. You
say, first of all, where did
that come from? And when did we
pay the computational cost for it? Because
we know when we pay the computational
cost to design a frog or a human, it was
for the eons that the genome was bashing
against the environment getting selected, right?
So you pay the computational cost of that.
There’s never been any anthrobots. There’s never been
any xenobots. When do we pay the computational cost for
designing kinematic self-replication and,
you know, all these things that they’re able
to do? So there’s two
things people say. One is,
“Well, it’s sort of … You
got it at the same time that
they were being selected to be good humans
and good frogs.” Now, the problem with that
is it kind of undermines the point of
evolution. The point of evolutionary
theory was to have a very tight
specificity between how you are
now and the history of selection that got you here,
right? The history of environments that got you
to this point. If you say, “Yeah, okay, so
this is what your environmental history
was. And by the way, you got something
completely different. You got these other
skills that you didn’t know about,” that’s
really strange, right? And so then what people say
is, “Well, it’s emergent.” And I say,
“What’s that? What does that mean?” And
they say… Besides the fact that you got surprised,
right? “Emergent” often just means I didn’t see it
coming. You know, something happened;
I didn’t know that was going to happen.
So what does it mean that it’s emergent?
And people say, “Well,” and there are
many emergent things like this. For example,
the fact that gene regulatory networks can do
associative learning. Like, that’s amazing,
and you don’t need evolution for that.
Even random genetic regulatory networks
can do associative learning. I say, “Why
does that happen?” And they say, “Well,
it’s just a fact that holds in the world.
Just a fact that holds.” So
now you have an
option, and you can go one of two
ways. You can either say, “Okay, look,
I like my sparse ontology. I don’t want to
think about weird platonic spaces. I’m a
physicalist. I want the physical world, nothing
more.” So what we’re going to do is when we come
across these crazy things that are very
specific, like, you know, anthrobots have
four specific behaviors that they switch
around. Why four? Why not 12? Why not 100?
When we come across these
things, just like when we come across the
value of e or Feigenbaum’s number or whatever,
what we’re going to do is we’re going to write it
down in our big book of emergence. And that’s
it. We’re just going to have to live with it.
This is what happens. You know, there’s some cool
surprises; it’s a random grab bag of stuff, and when
we come across them, we’ll write them down. Great.
That’s one. The upside is you get
to be a physicalist, and you get to
keep your sparse ontology.
The downside is I find it
incredibly pessimistic
and mysterian because
you’re basically then just
willing to make a catalog
of these amazing patterns.
Why not, instead, and this is
why I started with this
Platonic terminology,
why not do what the
mathematicians already do? A
huge number of them say,
“We are gonna make the same
optimistic assumption that science makes, that
there’s an underlying structure to that latent
space. It’s not a random grab
bag of stuff. There’s a structure
to the space where these patterns come
from, and by studying them systematically,
we can get from one to another. We can map
out the space. We can find out the
relationships between them. We can get an idea
of what’s in that space, and we’re not going to
assume that it’s just random. We’re gonna
assume there’s some kind of structure to it.
And you’ll see all kinds of people, I mean, you know,
well-known mathematicians that talk about this stuff.
You know, Penrose and lots of other people who
will say that, “Yeah, there’s another space,
basically, and it has spatial structure.
It has components to it and so
on. We can traverse that space in various
ways.” And then there’s the
physical space. So I
find that much more
appealing, because it suggests
a research program, which we
are now undergoing in our lab. The research
program is everything that we make,
cells, embryos, robots, biobots, language
models, simple machines,
all of it, they are
interfaces. All physical
things are interfaces to these
patterns. You build an interface, some of
those patterns are going to come through that
interface. Depending on what you build,
some patterns versus others are going
to come through. The research program
is mapping out that relationship
between the physical pointers that we make,
and the patterns that come through it,
right? Understanding the structure of that
space, what exists in that space, and what do I
need to make physically to make certain
patterns come through? Now, when I say
patterns, now we have to ask, “What kinds of things
live in that space?” Well, the mathematicians will
tell you, “Well, we already know. We have a whole
list of objects. You know, the amplituhedrons and
all this crazy stuff that lives in
that space.” Yeah, I think that’s
one layer of stuff that lives in
that space, but I think those
patterns are the lower
agency kinds of things that
are basically studied by
mathematicians. What also
lives in that space are much
more active, more complex,
higher agency patterns that we
recognize as kinds of minds, that
behavioral scientists would look at that
pattern and say, “Well, I know what that is.
That’s the competency for delayed gratification
or problem-solving of certain kinds,” or
whatever. And so what I end
up with right now is a model in
which that latent space contains
things that come through physical
objects, so simple patterns,
right? Facts about triangles
and Fibonacci
patterns and fractals and things like
that. But also, if you make
more complex interfaces, such as
biologicals, and importantly
not just biologicals, but let’s
say cells and embryos and tissues, what
you will then pull down is much more
complex patterns that we say, “Ah,
that’s a mind. That’s a human
mind,” or, “That’s a, you know,
snake mind,” or whatever. So
I think the mind-brain relationship is exactly
the kind of thing that the math-physics
relationship is, that in some very
interesting way, there are truths of
mathematics that become
embodied, and they kind of haunt
physical objects, right, in a
very specific functional way. And
in the exact same way, there are
other patterns that are much more
complex, higher agency
patterns that basically
inform living things that we
see as obvious embodied minds.
Okay, given how weird and complicated
what you’re describing is, we’ll talk
about it more, but you gotta
ELI5 the basics to a person
who’s never seen this. So again,
you mentioned things like
pointers. So the physical object
itself or the brain is a pointer to that
Platonic space. What is in
that Platonic space? What is
the Platonic space? What is the
embodiment? What is the pointer?
Yeah, okay. Let’s try it this way. There
are certain facts of mathematics. So the
distribution of prime numbers, right? If you
map them out, they make these nice spirals.
And there’s an image that I often show,
which is a very particular kind of fractal.
And that fractal is the Halley
map, which is pretty awesome
in that it actually looks very organic. It looks
very biological. So if you look at that
thing, that image, which
has very specific complex
structure, it’s a map of a
very compact mathematical
object. The formula is something like
z cubed plus seven.
That’s it. So now you look
at that structure and you say,
“Where does that actually come from?” It’s
definitely not packed into the z cubed plus
seven. There’s not enough
bits in that to give you all of that.
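Here is a minimal sketch of that "more out than you put in" point, assuming the image is a Halley-method fractal for f(z) = z^3 + 7 (the exact polynomial Levin shows is a guess from the transcript): a few lines of iteration rules generate the whole intricate structure:

```python
# Halley's-method fractal: color each starting point by iterations to converge.
import numpy as np
import matplotlib.pyplot as plt

def halley_counts(n=800, extent=2.5, max_iter=40, tol=1e-6):
    xs = np.linspace(-extent, extent, n)
    z = xs[None, :] + 1j * xs[:, None]         # grid of complex seeds
    counts = np.zeros(z.shape, dtype=int)
    with np.errstate(divide='ignore', invalid='ignore', over='ignore'):
        for _ in range(max_iter):
            f, fp, fpp = z**3 + 7, 3 * z**2, 6 * z
            step = 2 * f * fp / (2 * fp**2 - f * fpp)  # Halley update
            moving = np.abs(step) > tol                 # not yet converged
            z = np.where(moving, z - step, z)
            counts += moving                            # iterations per pixel
    return counts

plt.imshow(halley_counts(), cmap='twilight', extent=(-2.5, 2.5, -2.5, 2.5))
plt.title('Halley-method fractal for f(z) = z^3 + 7')
plt.show()
```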
There’s no fact of physics that determines
this. There’s no evolutionary history.
It’s not like we selected this based on
some, you know, from a larger set over
time. Where does this come
from? Or the fact
that… Think about the way that
biology exploits these things.
Imagine a world in which the
highest fitness belonged to
a certain kind of triangle, right? So
evolution cranks a bunch of generations and it gets
the first angle right, then cranks a bunch more
generations, gets a second angle right. Now,
there’s something amazing that happens.
It doesn’t need to look for the third angle, because you
already know. If you know two, you get this magical free
gift from geometry, since the three angles must sum to
180 degrees: you already know what the third one should be.
You don’t have to go look for it. Or as evolution, if you invent a
voltage-gated ion channel, which is
basically a transistor, right, and you
can make a logic gate, then all the truth
tables and the fact that NAND is
special and all these other things,
you don’t have to evolve those things. You get
those for free. You inherit those. Where do all
those things live? These mathematical truths
that you come across that you don’t have any
choice about. You know, once you’ve
committed to certain axioms,
there’s a whole bunch of other stuff
that is now just it is what it
is. And so what I’m saying is,
and this is what
Pythagoras was saying, I think, that
there is a whole space of these kinds
of truths. Now, he was focused
on mathematical ones, but he
was embodying them in music and in geometry
and in things like that. There is
this space of patterns, and
they make a difference in the
physical world, to machines, to
sound, to things like that. I’m
extending it, and what I’m saying
is, yeah, and so far we’ve
only been looking at the low
agency inhabitants of that…
world. There are other patterns
that we would recognize as kinds of
minds, and you don’t
see them in this space
until there’s an interface, until there’s a way
for them to come through the physical world.
It’s the same way that
you have to make a triangular object
before you can actually see
what you’re going to gain, right?
Out of the rules of geometry and
whatever. Or you have to actually do
the computation on the fractal before
you actually see that pattern.
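To make the NAND "free lunch" mentioned a moment ago concrete, here is a toy: once you have that one gate, every other logic function is inherited rather than evolved:

```python
# NAND is functionally complete: all other gates come for free.
nand = lambda a, b: 1 - (a & b)

not_ = lambda a:    nand(a, a)
and_ = lambda a, b: not_(nand(a, b))
or_  = lambda a, b: nand(not_(a), not_(b))
xor_ = lambda a, b: and_(or_(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', and_(a, b), or_(a, b), xor_(a, b))
```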
If you want to see some of those
minds, you have to build an interface,
at least if you’re going to interact with them in
the physical world, the way we normally do science.
As Darwin said, “Mathematicians have their own new
sense, like a different sense than the rest of us.”
So that’s right. You know,
mathematicians can perhaps interact
with these patterns directly in
that space. But for the rest of us,
we have to make interfaces.
And when we make interfaces,
which might be cells, robots,
embryos, or whatever, what we are
pulling down are minds
that are fundamentally not
produced by physics. So I don’t believe that, I don’t know
if we’re going to get into the whole consciousness thing,
but I don’t believe that we create
consciousness, whether we make babies
or whether we make robots. Nobody’s
creating consciousness. What you create
is a physical interface through which
specific patterns, which we call
kinds of minds, are going to
ingress, right? And consciousness
is what it looks like from that
direction looking out into the world.
It’s what we call the view from the
perspective of the Platonic patterns.
Just to clarify, what you’re saying
is a pretty radical idea here.
If there’s a mapping from mathematics to
physics, okay, that’s
understandable, intuitive, as you’ve
described. But what you’re suggesting is
there’s a mapping from some
kind of abstract mind object
to an embodied brain that
we think of as a mind—
as fellow humans. What is that?
What exactly? You said interface.
You’ve also said pointer. So
the brain, and I think you
said somewhere a thin interface.
A thin client. Yeah. The brain—
Thin client.
The brain, a brain is a thin client. Yeah.
Thin client. Okay. So you’re… A brain
is a thin client to this other world.
Can you just lay out very
clearly how radical the idea is?
Sure.
Because you’re kind of dancing
around. I think you could also
point to Donald Hoffman, who
speaks of an interface to a
world. So we only interact with
the “real world” through an
interface. What is the connection here?
Yeah. Okay, a couple of things.
First of all, when you said it makes
sense for physics, I want to show
that it’s not as simple as it sounds.
Because what it means is
that even in Newton’s
boring, classical universe,
long before quantum anything, Newton’s world, physicalism was
already dead. In Newton’s world,
I mean, think about what that
means. This is nuts, because
already he knew perfectly well, I
mean Pythagoras and Plato knew, that
even in a totally classical,
deterministic world,
already you have the
ingression of information
that determines what happens and what’s
possible and what’s not possible in that world
from a space that is itself not
physical. In other words, it’s something
like e, the base of the natural logarithm, right?
Nothing in Newton’s world is set to
the value of e. There is nothing you could
do to set the value of e in that world. And
yet that fact that it was that and
not something else governed all sorts
of properties of things that
happened. That classical world was
already haunted by patterns
from outside that world.
This should be… This is wild. This is not
saying that, “Okay, everything was
cool. Physicalism was great up
until, you know, maybe we got
quantum this, interfaces, or we got,
you know, consciousness or whatever. But originally
it was fine.” No, this is saying that it
was… That worldview
was already impossible
really since… So from
a very long time ago, we
already knew that there are non-physical
properties that matter in the physical world.
This is the chicken or the egg question.
You’re saying Newton’s laws are
creating the physical world?
That is a very deep
follow-on question that
I… we’ll come back to in a minute.
All I was saying about Newton is
that you don’t need quantum
anything. You don’t need to
think about consciousness. You already,
long before you get to any of that, as
Pythagoras, I think, knew, already we have
the idea that this physical world is being
strongly impacted by truths that do not
live in the physical world. And when I say-
Wait. Which truths are we referring to? Are we
talking about Newton’s laws, like mathematical
equations or?
No. Mathematical facts. So for
example, the actual value of e or-
Oh, like very primitive
mathematical facts.
Yeah, yeah. I mean, some of them are, you know…
I mean, if you ask Don Hoffman, there’s this
like, amplituhedron thing that is a set
of mathematical objects that determines
all the scattering amplitudes of the particles
and whatever. They don’t have to be simple.
I mean, the old ones were simple. Now
they’re like crazy. I can’t imagine this
amplituhedron thing, but maybe
they can. But all of these
are mathematical structures
that explain and determine
facts about the physical world, right? If you ask
physicists, “Hey, why this many of this type of
particle?” “Ah, because this mathematical
thing has these symmetries.” That’s why.
So Newton is discovering these
things. He’s not inventing.
This is very controversial, right? And there are
of course physicists and mathematicians who,
who disagree with what
I’m saying, for sure. But
what I’m leaning on is simply
this. I don’t know of anything you
can do in the physical world.
You’re around at the Big Bang, you get
to set all the constants,
change physics however you
want. Can you change e? Can you
change Feigenbaum’s constant?
I don’t think you can.
Is that an obvious statement? I don’t
even know what it means to change the
parameters at the start of the Big Bang.
So physicists do this. They’ll say,
“Okay, you know, if we changed the
ratio between, you know,
gravitation and whatever else,
would we have matter? Would we…
How many dimensions would we have? Would there
be inflation? Would there be this or that?”
Right? You can imagine playing
with it. There are some number of
dimensionless constants of physics. These
are like the knobs on the
universe that could, in
theory, be different, and
then you’d have different physics, you’d
have different physical properties.
You’re saying that’s not going to change
the axiomatic systems that mathematics has?
What I’m not saying is that every alien
everywhere is going to have the exact same math
that we have. That’s not what I’m claiming.
Although, maybe. But that’s not what I’m claiming.
What I’m saying is, you get more out
than you put in. Once you’ve made a
choice… And maybe some alien somewhere made a
different choice of how they’re going to do their math.
But once you’ve made your choice, then
you get saddled with a whole bunch
of new truths that you discover that you can’t
do anything about. They are given to you
from somewhere. And you can say they’re
random, or you can say, “No, there’s this
space of these facts that they’re pulled from.
There’s a latent space of options that they come
from.” So when you get… So when
your e is exactly 2.718 and so
on, there is nothing you can
do in physics to change it.
And you’re saying that
space is immutable? It’s-
I’m not saying it’s immutable. So I
think Plato may or may not have thought
that these forms are eternal and unchanging.
That’s one place we differ. I actually think
that space has some action to it,
maybe even some computation to it.
But we’re just pointers. Can this-
Well, so let’s… Okay, so I’ll circle, I’ll
circle back around to that whole thing.
So the only thing I was trying to do is
blow up the idea that we’re cool with
how it works in physics. No problem
there. I don’t… I think that’s a
much bigger deal than people normally
think it is. I think already
there, you have this weird
haunting of the physical world by
patterns that are not coming
from the physical world.
The reason I emphasize this is because
now what I’m going to… when I
amplify this into biology, I don’t
think it sort of jumps as a new
thing. I think it’s just a much
more… I think what we call biology
is our systems that exploit the hell
out of it. I think physics is so
constrained by it, but we call
biology those things that
make use of those kinds of things
and run with it. And so I, again,
I just think it’s a scaling. I don’t think it’s a
brand new thing that happens. I think it’s a scaling,
right? So what I’m saying is
we already know from physics
that there are non-physical
patterns, and these are generally patterns
of form, which is why I call them low
agency, because they’re like fractals that
stand still, and they’re like prime number
distributions. Although there’s a mathematician
that’s talking in our symposium that’s
telling me that actually I’m too chauvinistic
even there. That actually, even those things have
more oomph than even I gave
‘em credit for, which I love.
So what I’m saying is those kind of static
patterns are things that we
typically see in physics,
but they’re not the full extent of what
lives in that space. That space is
also home to some patterns that are
very high agency. And if we give them a
body, if we build a body that
they can inhabit, then we
get to see different behavioral competencies that
the behavior scientists say, “Oh, I know what that
looks like.” That’s this kind of behavioral
you know… This kind of mind or
that kind of mind. In a certain
sense, I mean, yes, what I’m saying
is extremely radical, but it is a very
old idea. It’s an old
idea of a dualistic world
view, right? Where the mind
was not in the physical
body, and that it in some way
interacted with the physical
brain. So, I just want to be clear. I’m not
claiming that this is fundamentally a new idea.
This has been around forever.
However, it’s mostly been
discredited, and it’s
a very unpopular view
nowadays. There are very few people in, for
example, the cognitive science community or
anywhere else in science that like this kind
of view. Primarily because of the interaction
problem; already Descartes was getting crap for this when he
first tried it out, right? So the idea was, okay,
well, if you have this non-physical mind,
and then you have this brain that presumably obeys
conservation of mass energy and things like that, how are
you supposed to interact with it?
And there are many other problems
there. So what I’m trying to
point out is that, first of all,
physics already had this problem. You didn’t
have to wait until you had biology and
cognitive science to ask about it. And
what I think is happening, and the way
we need to think about this, is
coming back to my point that I
think the mind-brain relationship
is basically of the same
kind as the math-physics
relationship. The same way that
non-physical facts of physics
haunt physical objects is basically
how I think different kinds
of patterns that we
call kinds of minds are
manifesting through our…
through interfaces like brains.
How do we prove or disprove
the existence of that
world? ‘Cause it’s a pretty radical one.
Because this physical world, we can
poke. It’s there. It feels like all the
incredible things like consciousness
and cognition and all the
goal-oriented behavior and agency all
seems to come from this 3D entity.
Yeah, I mean…
And so like, we can test it. We can
poke it. We can hit it with a stick.
Yeah, sort of.
Makes noises.
Sort of. I mean, so Descartes
got some stuff wrong, I
think. But one thing that he did get right, the
fact that you actually, you don’t know what
you can poke and what you can’t poke. The only
thing you actually know are the contents of your
mind, and everything else
might be… And in fact, what we know from Anil
Seth and Don Hoffman and various other people,
it’s definitely a construct. You
might be on drugs, and you might
wake up tomorrow and say, “My God, I had the
craziest dream of being Lex Fridman.” Amazing.
It’s a nightmare.
Yeah, well… Yeah,
that… Who knows? But-
It’s a ride.
Right? But you see, I… You know, it’s
not clear at all that the
physical poking is your primary
reality. That’s not clear to me at all.
I don’t know that that’s an obvious thing a lot of people can show is true. Even the step to Descartes, “I think, therefore I am,” that’s the only thing you know for sure and everything else could be an illusion or a dream, that’s already a leap. I think from a basic caveman-science perspective, the repeatable experiment is where most of our intelligence comes from: the reality is exactly as it is. To take
a step towards the Donald
Hoffman worldview takes a
lot of guts and imagination,
and stripping away of the ego and
all these kinds of processes.
I think you can get there more
easily by synthetic bioengineering
in the following sense. Do
you feel a lack of x-ray
perception? Do you feel blind
in the x-ray spectrum or in the
ultraviolet? I mean, you don’t. You
have absolutely no clue that stuff is
there, and all of your reality
as you see it is shaped by your
evolutionary history. It’s shaped by the
cognitive structure that you have, right?
There is a ton of stuff going on around us right now of which we are completely oblivious. There’s equally
all kinds of other stuff which we
construct, and this is just modern
cognitive science that says that a
lot of what we think is going on is
a total fabrication constructed by
us. So I don’t think this is just a philosophical point. I mean, Descartes got there from a philosophical point, but that’s not the leap I’m asking us to make. I’m saying that depending
on your embodiment, depending on your
interface, and this is increasingly gonna
be more relevant as we make
first augmented humans that have
sensory substitution. You’re gonna be walking
around. Your friend’s gonna be like, “Oh, man,
I have this primary perception of the solar
weather and the stock market because I got those
implants.” “And what do you see?” “Well, I see
the, you know, the traffic or the internet through
the, you know, Trans-Pacific Channel.” We’re
all gonna be living in somewhat different
worlds. That’s the first thing. The
second thing is we’re gonna become better
attuned to other beings, whether they be
cells, tissues. You know, what’s
it like to be a cell living in
a 20,000-dimensional
transcriptional space, okay?
To novel beings that have never been
here before that have all kinds of
crazy spaces that they live in,
and that might be AIs. It might be
cyborgs. It might be hybrids. It might
be all sorts of things. So this idea that we have a consensus reality here, independent of some very specifically chosen aspects of our brains and our interactions, we’re gonna have to give that up no matter what in order to relate to these other beings.
I think the tension is, absolutely, with this idea that you’re talking about, which I think you’ve termed cognitive prosthetics, different ways of perceiving and interacting with the world. But I guess the question is: is our direct human experience just a slice of the real world, or is it a pointer to a different world? That’s what I’m trying to…
…figure out, because the
claim you’re making is a really
fascinating one, a compelling
one. It’s a pretty strong claim: that there’s another world that our brain is an interface to, which means you could theoretically map that world systematically.
Yeah, which is exactly what we’re
trying to do. I mean, we’re-
Right, right, but it’s not
clear that that world exists.
Yeah, yeah, okay. I mean, so that’s the
beautiful part about this, and this is
why I’m talking about this now, whereas
I wasn’t, you know, about a year ago.
Up until a year ago, I was never talking
about this because I think this is now
actionable. So there’s this diagram that’s
called the Map of Mathematics, and they
basically try to show how all
the different pieces of math
link together, and there’s a bunch of different versions of it. So there are two features to this. One is, what is it a map of? Well, it’s a map of various truths. It’s a map of facts that are thrust on you. You don’t have a choice. Once you’ve picked some axioms, there are surprising facts that are just going to be given to you.
But the other key thing about this is that
it has a metric. It’s not just a random
heap of facts. They’re all connected to
each other in a particular way. They
literally make a space, and so when
I say it’s a space of patterns, what
I mean is it is not just a random bag
of patterns such that when you have one
pattern, you are no closer to finding any
other pattern. I’m saying that there’s some
kind of a metric to it so that
when you find one, others are
closer to it, and then you can
get there. So that’s the claim.
And obviously, this is… Now, not
everybody buys this and so on. This is one
idea. Now, how do we know that this
exists? Well, I’ll say a couple of
things. If that didn’t exist, what is
that a map of? If there is no space, if
you don’t want to call it a space, that’s
okay, but you can’t get away from the fact
that as a matter of research, there
are patterns that relate to each
other in a particular
way. The final step of calling it a space is minimal. The bigger issue is what the hell is it a map of, then, if it’s not a space? So that’s the first thing. Now, that’s how it
plays out, I think, in math and physics.
Now in biology, here’s how we’re going
to know if this makes any sense.
What we are doing now is trying
to map out that space by saying,
“Look, we know that the frog genome maps to one thing, and that’s a frog.” It turns out that with that exact same genome, if you just take the slightest step, you just take some cells out of their environment, they can also make xenobots, with very specific different transcriptomes, very specific behaviors, very specific shapes. It’s not just, “Oh, well,
you know, they do whatever.” They have
very specific behaviors, just like
the frog had very specific properties.
We can start to map out what all those
are and basically try to draw the
latent space from which those things
are pulled. And one of two things
is going to happen in the future,
so this is, you know, come back in 20
years and we’ll see how this worked out.
One thing that could happen is that
we’re going to see, “Oh, yeah,
just like the map of mathematics,
we made a map of the space.
And we know now that if I want a
system that acts like this and this,
here’s the kind of body I need to make
for it, because those are the patterns
that exist. The Anthrobots have four
different behaviors, not seven and not one.
And so, that’s what I can pull
from. These are the options I have.
Is it possible that there are varying degrees of grandeur to the space that you’re thinking about mapping?
Meaning, it could be just like with
the space of mathematics, might
it strictly be just the space of
biology, or is this a space of, like,
minds, which feels like it could
encompass a lot more than just biology?
Yeah. And I don’t see
how it would be separate
because I’m not just talking
about an anatomical shape and
transcriptional profile. I’m
also talking about behavioral
competencies. So when we make something
and we find out that, okay, it does
habituation, sensitization, it does
not do Pavlovian conditioning,
and it does do delayed gratification,
and it doesn’t have language, that is a
very specific cognitive profile. That’s a
region of that space, and there’s another
region that looks different, because I
don’t make a sharp distinction between
biology and cognition. If you
want to explain behaviors,
they are drawn from some distribution
as well. So I think in 20
years, or however long it’s going to
take, one of two things will happen.
Either we and other people who are
working on this are going to actually
produce a map of that space and
say, “Here’s why you’ve gotten
systems that work like this and like
this and like this, but you’ve never
seen any that work like that.” Or,
we’re going to find out that I’m
wrong, and that basically
it’s not worth calling it
a space, because it is so random
and so jumbled up that we’ve been able to make zero progress in linking the
embodiments that we make to the
patterns that come through.
Yeah, just to be clear,
from your blog post on
this from the paper, we’re talking about
a space that includes a lot of stuff.
Yeah, yeah.
It includes a human, what is it, meditating?
Steve. “Hello, my name is Steve.”
AI systems, so all those basic
computational systems, objects,
biological systems, concepts. It includes
everything.
Well, it includes specific patterns
that we have given names to.
Right.
Some of those patterns we’ve named
mathematical objects. Some of those
patterns we’ve named anatomical outcomes.
Some of those patterns we’ve named
psychological types.
So every entry in an encyclopedia,
old-school Britannica,
is a pointer to this space.
There is a set of things that I feel
very strongly about because the research
is telling us that’s what’s going on,
and then there’s a bunch of other stuff
that I see as hypotheses for next
steps that guide experiment.
So what I’m about to tell you, these
are things I don’t actually know.
These are just guesses, but you need to make some guesses to make progress. I don’t know that there are going to be specific platonic patterns for, “This is the Titanic, and this is the sister of the Titanic, and this is some other kind of boat.” This is not what I’m saying.
What I’m saying is, in some
way that we absolutely need to
work out when we make minimal interfaces,
we get more than we put in. We get
behaviors. We get shapes. We get
mathematical truths, and we
get all kinds of patterns that
we did not have to create. We didn’t micromanage
them. We didn’t know they were coming.
We didn’t have to put any effort into
making them. They come from some
distribution that seems to exist
that we don’t have to create.
And exactly whether that space
is sparse or dense, I don’t
know. So for example, if there
is a, you know, some kind of
a platonic form for the movie, The
Godfather, if it’s surrounded by a bunch of crappy
versions and then crappier versions still, I
have no idea, right? I don’t know if
the space is sparse or not. I, you
know, I don’t know if it’s finite or
infinite. These are all things I don’t know.
What I do know is that
it seems like physics, and for sure
biology and cognition, are the beneficiaries of ingressions that are free lunches in some sense.
We did not make them. Calling them
emergent does nothing for a research
program, okay? That just means you
got surprised. I think it’s much better if you make the optimistic assumption that they come from a structured space that we have a prayer in hell of actually exploring.
And if in some decades I’m wrong, and it turns out, “You know what? We tried. It looks like it really is random. Too bad.” Fine.
Is there a difference between, like
can we one day prove the existence of
this world? And is there a difference
between it being a really effective model
for connecting things, explaining
things, versus like an actual
place where the information about these
distributions that we’re sampling
actually exists, that we
can hit with a stick?
You… Yeah, you can try
to make that distinction.
But I think modern
cognitive neuroscience will tell
you that whatever you think this is, at
most, it is a very effective
model for predicting the future
experiences you’re going to have.
So all of this that we think about as
physical reality is a nice, convenient model.
I mean, that’s not me. That’s predictive
processing and active inference. That’s modern
neuroscience telling you this,
that this isn’t anything that I’m
particularly coming up with. All
I’m saying is the distinction, the
distinction you’re trying to make, which is
like an old school, realist, you know, kind of
view, that is it metaphorical
or is it real? All we have in
science are metaphors, I think,
and the only question is how good are
your metaphors. And I think, as agents living in a world, all we have are models of what we are and what the outside world is. That’s it. And the question is, how good a model is it? And my
claim about this is in some
small number of decades, this
will either give rise to a
very enabling mapping
of the space for AI, for bio-engineering, for, you know, biology, whatever.
Or we are going to find out that
it really sucks, because it really
is a random grab bag of stuff, and
we tried the optimistic research program, it failed,
and we’re just going to have to live with surprise.
I mean, I doubt that’s going to
happen, but it’s a possible outcome.
But do you think there is some place where the information is stored about these distributions that are being sampled through the thin interfaces? Like, an actual place?
Place is weird because it isn’t the
same as our physical space-time, okay?
I don’t think it’s that. So calling it
a place is a little, a little weird.
No, but like physics, general
relativity describes a space-time.
Okay.
Could other physics theories describe this other space where information is stored, to which we can apply, maybe different but in the same spirit, laws about…
Yes.
… information?
I definitely think there are going to be
systematic laws. I don’t think they’re going to
look anything like physics. You can call it physics
if you want, but I think it’s going to be so
different that it probably breaks the word. Um,
and whether information is
going to survive that, I’m not
sure. But I definitely think there are going to be laws. But I think they’re
going to look a lot more like
aspects of psychology and cognitive
science than they’re going to look like
physics. That’s my guess.
So what does it look like
to prove that world exists?
What it looks like is a
successful research program
that explains how you pull
particular patterns when you
need them, and why some patterns
come and others don’t, and show
that they come from an ordered space.
Across a large number of organisms?
Well, it’s not just organisms. And you can talk to the machine learning people about how they got to this point, because this is not just me.
There’s a bunch of different disciplines
that are converging on this now
simultaneously. You’re going to find, again, just like in mathematics, that from different directions everybody is looking at different things and saying, “Oh my God, there is one underlying structure that seems to inform all of this.” So in physics, in
mathematics, in computer
science, machine learning,
possibly in economics, certainly
in biology, possibly in, you know,
cognitive science, we’re going to find
these structures. It was already obvious
in Pythagoras’ time that there
are these patterns. The only
remaining question is, are they part of an
ordered structured space, and are we up
to the task of mapping out the
relationship between what we
build and the patterns
that come through it?
So from the machine
learning perspective, is it
then the case that even
something as simple as LLMs
are sneaking up onto this world,
that the representations that they
form are sneaking up to it?
When I’ve given this talk to some audiences, especially in the organicist community, people like the first part, where it’s like, “Okay, now there’s an idea for what the quote-unquote magic is that’s special about living things,” and so on. Now,
if we could just stop there, we would
have dumb machines that just do
what the algorithm says, and we have
these magical living interfaces that can be the recipients of these ingressions. Cool, right? We can cut up the world in this way. Unfortunately, or fortunately, I think, that’s not the case.
And I think that even simple, minimal computational models
are to some extent beneficiaries of
these free lunches. I think that
the theories we have, and this goes back to the thin-client interface kind of idea.
The theories we have of
both physics and computation, so theory
of algorithms, you know, Turing machines,
all that good stuff. Those are all good
theories of the front end interface,
and they’re not complete theories of the whole
thing. They capture the front end, which is why we get surprised, why these things are surprising when they happen. I think that when we see
embryos of different species,
we are pulling from well-trodden
familiar regions of that
space, and we know what to
expect. Frog, you know, snake,
whatever. When we make cyborgs and hybrids
and biobots, we are pulling from new
regions of that space that look a little
weird and they’re unexpected, but you
know, we can still kind of get our minds around them. When we start
making AIs, proper AIs, we are now
fishing in a region of that space
that may never have had bodies
before. It may have never been
embodied before. And what we get from
that is going to be extremely
surprising. And the final thing
just to mention on that is that
because of this, because of the inputs from
this platonic space, some of the really
interesting things that
artificial constructs can do are
not because of the algorithm, they’re
in spite of the algorithm. They
are filling up the spaces in between. There’s
what the algorithm is forcing you to do,
and then there’s the other cool stuff it’s
doing which is nowhere in the algorithm.
And if that’s true, and we think
it’s true even of very minimal
systems, then in this whole business of language models and AIs in general, watching the language part may be a total red herring, because the language is what we force them to do. The question is, what else are they doing that we are not good at noticing? And I think that becoming better at this is a kind of existential step for humanity, because we are not good at recognizing these things now.
You got to tell me more
about this behavior that is
observable, that is
unrelated to the explicitly
stated goal of a particular algorithm.
So you looked at a simple algorithm of sorting. Can you explain what was done?
Sure. First, just the goal of this study, there
are two things that people generally assume.
One is that we have a
pretty good intuition
about what kind of systems are
gonna have competencies. So
from observing biologicals, we’re not terribly
surprised when biology does interesting
things. Everybody always says, “Well, it’s
biology, you know, of course it does all this cool
stuff.” But then we have these machines.
And the whole point of having
machines and algorithms and so on,
is they do exactly what you tell them to
do, right? And people feel pretty strongly
that that’s a binary distinction,
and that we can carve up the world that way. So,
I wanted to do two things. I wanted
to first of all, explore that and
hopefully break the assumption
that we’re good at seeing this, because I
think we’re not. And I think it’s extremely
important that we understand very
soon that we need to get much better
at knowing when to expect these
things. And the other thing
I wanted to do was to find
out, you know, mostly
people assume that you need a
lot of complexity for this.
So when somebody says, “Well, the
capabilities of my mind are not properly
encompassed by the rules of biochemistry,”
everybody’s like, “Yeah, that makes sense.”
Because, you know, you’re very complex, and okay, your mind does things that
you didn’t see coming from the
rules of biochemistry, right?
We know that. So mostly people
think that has to do with
complexity. And what I would like to find out
is, as part of understanding what kind of
interfaces give rise to what kind of
ingressions, is it really about complexity?
How much complexity do you actually
need? Is there some threshold
after which this happens? Is it
really specific materials? Is it
biologicals? Is it something about
evolution? Like, what is it about these
kinds of things that allows this
surprise, right? Allows this
idea that we are more than the
sum of our parts. And I had a
strong intuition that none of those
things are actually required, that this
kind of magic, so to speak,
seeps into pretty much
everything. And so to
look at that, I wanted
also to have an example that
had significant shock value.
Because the thing with biology is
there’s always more mechanism to be discovered,
right? There’s infinite depth of what the
materials are doing. Somebody will always say, “Well,
there’s a mechanism for that, you just haven’t found it
yet.” So I wanted an example that
was simple, transparent, so you
could see all the stuff. There was nowhere
to hide. I wanted it to be deterministic,
because I don’t want it to be something
around unpredictability or stochasticity,
and I wanted it to be something
familiar to people, minimal. And I
wanted to use it as a model system for
honing our ability to take a new system and look at it with fresh eyes. And that’s
because these sorting algorithms
have been studied for over 60 years.
We all think we know what they do and what their
properties are. The algorithm itself is just a few
lines of code, you know? You can
see exactly what’s there. It’s
deterministic. So that’s
why, right? I wanted
the most shock value out of a system like
that, if we were to find anything, and
to use it as an example of taking
something minimal and seeing what can be
gotten out of it. So I’ll describe two
interesting things about it, and then
we have lots of other work coming
in the next year about even
simpler systems. I mean,
it’s actually crazy. Um,
so the very first thing is this.
Take standard sorting, so let’s say bubble sort, right? In all these sorting algorithms, what you’re starting out with is an array of jumbled-up digits, okay? So, integers. It’s an array of mixed-up integers, and what the
algorithm is designed
to do is to eventually
arrange them all into order, and what it
does, generally, is compare some pieces
of that array and, based on which one is
larger than which, it swaps them around.
And you can imagine that if you just keep doing
that and you keep comparing and swapping, then
eventually you can get all the digits in the
same order. So, the first thing I decided to do, and this is the work of my student Kaining Zhang and Adam Goldstein on this paper, goes back to our original discussion about putting a barrier between a system and its goals. I said, “Okay, how do we put a barrier in? Well, how about this.” The traditional algorithm
assumes that the hardware is working
correctly. So if you have a seven and then a five, and you tell them to swap, you run the lines that swap the five and the seven, and then you go on. You never check, “Did it swap?” because you assume that it’s reliable hardware, okay? So what
we decided to do was to
break one of the digits so that it doesn’t move.
When you tell it to move, it doesn’t move.
We don’t change the algorithm. That’s really
key. We do not put anything new in the algorithm
that says, “What do you do if the damn
thing didn’t move?” Okay? Just run it
exactly the same way. What happens?
Turns out, something very interesting
happens. It still works.
It still sorts it, but
it eventually sorts it by
moving all the stuff around the
broken number, okay? And that makes sense,
but here’s something interesting. Suppose
we plot, at any given
moment, the degree of
sortedness of the string as a
function of time. If you run the
normal algorithm, it’s guaranteed
to get where it’s going.
That’s it, you know, it’s got to sort,
and it will always reach the end.
But when it encounters one of
the broken digits, what happens
is, the actual sortedness goes down in order to then recoup and get better order later. What it’s able to do is go
against the thing that it’s trying to do,
to go around in order
to meet its goal later
on. Now, if I showed this to
a behavior scientist, and I didn’t tell them what system was doing it,
they would say, “Well, we know what
this is. This is delayed gratification.”
This is the ability of a system to
go against its gradient to get what it needs. Now, imagine two magnets. You take two magnets and you put
a piece of wood between them, and they’re like
this. What the magnet is not
going to do is to go around the
barrier and get to its goal. The two…
They’re not smart enough to go against
their gradient. They’re just going to keep doing
this. Some animals are smart enough, right?
They’ll go around, and… The sorting
algorithm is smart enough to do
that. But the trick is
there are no steps in the
algorithm for doing that. You could stare
at the algorithm all day long. You would
not see that this thing can do delayed
gratification. It isn’t there. Now, there are two ways to look at this. On the one hand, the reductionist physics approach, you could say, “Did it follow all the steps in the algorithm?” You say, “Yeah, it did.” Well,
then there’s nothing to see here.
There’s no magic. This is, you
know, it does what it does. It
didn’t disobey the algorithm,
right? I’m not claiming that this is a
miracle. I’m not saying it disobeys the
algorithm. I’m not saying it’s failing to sort. I’m not saying it’s doing some sort of, you know, crazy quantum thing. Not saying any of that. What I’m saying is what other people might call emergent. What it has are properties that are not complexity, not unpredictability, not perverse instantiation, as sometimes in ALife. What it has are unexpected competencies recognizable by behavioral scientists, meaning different types of cognition.
Primitive. We wanted primitive, so there you go. It’s something simple that you didn’t have to code into the algorithm. That’s very important. You get more than you put in. You didn’t have to do that. You get these surprising behavioral competencies, not just complexity. That’s the first thing.
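To make that concrete, here is a minimal sketch of the frozen-digit experiment. This is an illustration, not the paper’s actual code: the sortedness metric and the freezing mechanism are plausible assumptions, chosen so that the unmodified algorithm simply never notices that one swap keeps failing.

```python
# Minimal sketch of the "broken digit" experiment (illustration only,
# not the paper's code). Bubble sort runs unmodified; one value sits on
# "damaged hardware" and silently refuses to move. The algorithm never
# checks whether a commanded swap actually happened.
import random

def sortedness(a):
    # one plausible metric: fraction of adjacent pairs already in order
    return sum(a[i] <= a[i + 1] for i in range(len(a) - 1)) / (len(a) - 1)

def bubble_sort_frozen(a, frozen_value):
    a, trace = list(a), []
    for _ in range(len(a)):                  # ordinary bubble-sort passes
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:              # the algorithm says: swap
                if frozen_value not in (a[i], a[i + 1]):
                    a[i], a[i + 1] = a[i + 1], a[i]
                # else: the swap silently fails, and nothing checks it
            trace.append(sortedness(a))
    return a, trace

random.seed(0)
start = random.sample(range(10), 10)         # ten distinct digits
end, trace = bubble_sort_frozen(start, frozen_value=7)
print(start, "->", end)   # each side of the immovable 7 ends up sorted
print("sortedness over time:", ["%.2f" % s for s in trace[::10]])
```

Printing the trace lets you inspect the kind of transient dips in sortedness that get described here as delayed gratification; in this toy version the exact trajectory depends on the metric and the initial array.

The second thing is also crazy, but it requires a little bit of explanation.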
The second thing that we said is, “Okay, in the typical sorting algorithm, you have a single top-down controller.” I’m sort of godlike, looking down at the numbers, and I’m swapping them according to the algorithm. What if, and this goes back to
actually the title of the paper talks
about agential data, self-sorting
algorithms. This is back to like, who’s
the pattern and who’s the agent, right?
You say, “What if we give the numbers a little
bit of agency?” Here’s what we’re going to do: we’re not going to have any kind of top-down sort. Every single number knows the algorithm, and it’s just going to do whatever the algorithm says. So if I’m a
five, I’m just going to
execute the algorithm, and the
algorithm will try to make sure that to my
right is the six and to my left is a four.
That’s it. So it’s distributed, you know, it’s like an ant colony. There is no central planner.
Everybody just does their own algorithm,
okay? We’re just going to do that. Once you’ve done
that, and by the way, one of the values of doing
that is that you can simulate
biological processes because in
biology, you know, if I have like a frog
face and I scramble it with all the
different organs, every tissue
is going to rearrange itself so that
ultimately you have, you know, nose, eyes,
head. You’re going to have an order, right?
So you can do that. But, okay, fine, once you’ve done that, you can do something else cool that you can’t do with a standard algorithm.
You can make a chimeric algorithm. What
I mean is not all the cells have to
follow the same algorithm. Some of them might
follow bubble sort, some of them might follow
selection sort. It’s like in biology what
we do when we make chimeras, we make
frogolottles. So frogolottles have some
frog cells, they have some axolotl
cells. What is that going to look like? Does
anybody know what a frogolottle is going to look
like? It’s actually really interesting that despite
all the genetics and the and the developmental
biology, you have the genomes, you have
the frog genome, you have the axolotl
genome. Nobody can tell you what a frogolottle is
going to look like, even though you have, yeah.
This is back to your question about physics and chemistry. Yeah, you can know everything there is to know about how the physics and the genetics work, but the decision-making, right? Like, baby axolotls have legs. Tadpoles don’t have legs. Is a frogolottle going to have legs? Can you predict that from understanding the physics of transcription and all of that? Anyway, so we made some, uh…
So you see this as like an intersection of biology, physics-
…cognition. So we made
chimeric algorithms, and we
said, “Okay, half the digits randomly.” We assigned
them randomly. So half the digits are randomly doing
bubble sort, half the digits are randomly doing,
I don’t know, selection sort or something.
But once you choose bubble sort, that digit is sticking with bubble sort. We haven’t done the thing where they can swap between them, no. They’re sticking to it, right? You label them and they’re sticking to it.
Well, the first thing we learned is that distributed sorting still works. It’s amazing. You don’t need a central planner; with every number doing its own thing, it still gets sorted.
That’s cool. The second thing we found
is that when you make a chimeric
algorithm where actually the
algorithms are not even matching,
that works too. The thing still gets
sorted. That’s cool. But the most
amazing thing is when we looked at
something that had nothing to do with sorting,
and that is we asked the following question.
We defined… Adam Goldstein actually named
this property, and I think it’s well-named.
We define the algotype of a single cell. It’s not
the genotype, it’s not the phenotype, it’s the
algotype. The algotype is simply this: What
algorithm are you following? Which one are
you? Are you a selection sort or a bubble sort,
right? That’s it. There are two algotypes.
And we simply ask the following
question: “During that process of
sorting, what are the odds that whatever
algotype you are, the guys next
to you are your same type?”
It’s not the same as asking how the numbers are sorted because
it’s got nothing to do with the numbers. It’s actually…
it’s just whatever type you are.
It’s more about clustering than sorting.
Clustering. Well, that’s exactly what we call it.
So now think of what happens, and you can see this on that graph, it’s the red line.
You start off, the clustering is at
50% because as I told you, we assign
the algotypes randomly. So the odds that the
guy next to you is the same as you is half,
50%, right? Because there are only
two algotypes. In the end, it
is also 50% because the thing
that dominates is actually
the sorting algorithm, and the sorting algorithm doesn’t care
what type you are. You’ve got to get the numbers in order.
So by the time you’re done, you’re
back to random algotypes because you
have to get the numbers sorted. But
in between you get something very significant, because look at the control, the pink in the middle. In between you get
significant amounts of clustering, meaning
that certain algotypes like to hang
out with their buddies for as long as
they can. Now, here’s one more thing, and then I’ll give the philosophical significance of this. And
so we saw this and I said, “That’s nuts
because the algorithm doesn’t have
any provisions for asking what
algotype am I, what algotype
is my neighbor. If we’re not the same, I’m going
to move to be next to…” Like if you wanted to
implement this, you would have to write a whole bunch
of extra steps. There would have to be a whole bunch of
observations that you would have to take
of your neighbor to see how he’s acting.
Then you would infer what algotype he
is. Then you would go stand next to the
one that seems to have the same algotype as you. You
would have to take a bunch of measurements to say, “Wait,
is that guy doing bubble sort or is he doing selection sort,”
right? Like if you wanted to implement this, it’s a whole
bunch of algorithmic steps. None of that exists
in our algorithm. You don’t have any way of
knowing what algotype you are or what anyone
else is. Okay. We didn’t have to pay for that at
all. So notice a couple of interesting
things. The first interesting thing is
that this was not at all obvious from
the algorithm itself. The algorithm
doesn’t say anything about algotypes.
Second thing is we paid computationally
for all the steps needed to have the
numbers sorted, right? Because we know, you
know, you pay for a certain computation cost.
The clustering was free. We didn’t pay
for that at all. There were no extra
steps. So this gets back to your other question of how
do we know there’s a platonic space, and this is kind
of like one of the craziest things that we’re
doing. I actually suspect we can get free compute
out of it. I suspect that one of
the things that we can do here
is use these ingressions in a useful way that doesn’t require you to pay physical costs. Right? Because we know every bit has an energy cost that you have to pay. The clustering was free. Nothing extra was done.
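Here is a toy reconstruction of the agential, chimeric version. The cell-level moves and the clustering metric are assumptions for illustration, not the paper’s actual code: each cell is a (value, algotype) pair, cells act one at a time in random order following only their own rule, and we track how often neighbors share an algotype.

```python
# Toy agential, chimeric self-sorting (illustration only; the specific
# cell-level moves are assumptions, not the paper's code). Each cell is
# a (value, algotype) pair acting on its own behalf; no central planner.
import random

def clustering(cells):
    # fraction of adjacent cells that share an algotype
    same = [cells[i][1] == cells[i + 1][1] for i in range(len(cells) - 1)]
    return sum(same) / len(same)

def act(cells, i):
    value, algotype = cells[i]
    if algotype == "bubble":
        # bubble-style move: swap with my right neighbor if we're inverted
        if i + 1 < len(cells) and value > cells[i + 1][0]:
            cells[i], cells[i + 1] = cells[i + 1], cells[i]
    else:
        # selection-style move: trade places with the smallest value to
        # my right, if it is smaller than me
        rest = range(i + 1, len(cells))
        if rest:
            j = min(rest, key=lambda k: cells[k][0])
            if cells[j][0] < value:
                cells[i], cells[j] = cells[j], cells[i]

random.seed(1)
cells = [(v, random.choice(["bubble", "selection"]))
         for v in random.sample(range(40), 40)]

trace = [clustering(cells)]
for _ in range(100_000):
    act(cells, random.randrange(len(cells)))
    trace.append(clustering(cells))
    if all(cells[i][0] <= cells[i + 1][0] for i in range(len(cells) - 1)):
        break  # fully sorted, with no top-down controller

print("sorted after", len(trace) - 1, "single-cell acts")
print("clustering start %.2f, peak %.2f, end %.2f"
      % (trace[0], max(trace), trace[-1]))
```

Whether this particular toy shows the same above-chance clustering bump reported in the paper is something you can check by plotting the trace; the point of the sketch is only how little machinery the setup needs, and that nothing in either rule mentions algotypes at all.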
This plot, for people who are just
listening, on the X-axis is the
percentage of completion of the
sorting process and the Y-axis
is the sortedness of the listed numbers,
and then the red line is basically the degree to which they’re clustered. And,
you’re saying that there’s
this unexpected competence
of clustering. And I should
comment that I’m sure
there’s a theoretical computer scientist
listening to this saying, “I can
model exactly what is happening here and
prove that the clustering increases and
decreases,” taking the specific instantiation of the thing you’ve experimented with and proving certain properties of it.
But the point is that
there’s a more general
pattern here of probably other
things that you haven’t discovered,
unexpected competencies that emerge from this, that
you can get free computation out of this thing.
So this goes back to the very first thing
you said about physicists thinking that
physics is enough. You’re 100%
correct that somebody could look at
this and say, “Well, I see exactly
why this is happening. We can
track through the algorithm.” Yeah,
you can. There’s no miracle going on
here, right? The hardware isn’t doing some
crazy thing that it wasn’t supposed to do.
The point is that despite
following the algorithm to do one
thing, it is also at the same time
doing other things that are neither
prescribed nor forbidden by the
algorithm. It’s the space between
chance and necessity, which is how
a lot of people see these things.
It’s that free space, which we don’t really have a good vocabulary for, where the interesting things happen. And to
whatever extent it’s doing other things that
are useful, that stuff is
computationally without extra cost.
Now, there’s one other cool
thing about this. And this
is the beginning of a lot of thinking that
I’ve done about this. This relates to
AI and stuff like that:
intrinsic motivations.
The sorting of the digits
is what we forced it
to do. The clustering is an
intrinsic motivation. We didn’t ask
for it. We didn’t expect it to happen. We didn’t explicitly forbid it, but we didn’t know it was coming. This is a great definition of the
intrinsic motivation of a system. So when people say, “Oh, that’s a machine, it only does what you programmed it to do. I, as a human, have intrinsic motivation. You know, I’m creative and I have intrinsic motivation. Machines don’t do that.” Even this minimal thing
has a minimal kind of intrinsic
motivation, which is something
that is not forbidden by the
algorithm, but isn’t prescribed
by the algorithm either. And I think that’s
an important, you know, third thing besides
chance and necessity. Something
else that’s fun about this
is when you think about intrinsic motivations, think about a child. If you make him sit in math class all day, you’re never going to know what other intrinsic motivations he might have, right?
Like, who knows what else he might
be interested in. So I wanted
to ask this question. I want to say, if
we let off the pressure on the sorting,
what would happen? Now,
that’s hard because if you
mess with the algorithm, now it’s no longer the
same algorithm, so you don’t want to do that.
So we did something that I think was
kind of clever. We allowed repeat
digits. So if you allow repeat digits in your array, all the fives can still be after all the fours and before all the sixes, but you can keep them as clustered as you want.
So this thing at the end where they have to
get declustered in order for the sorting to
happen, we thought maybe we could let off the
pressure a little bit. If you do that, if all you do is allow some extra repeat digits, the clustering gets bigger. It will cluster as much as you let it. The clustering is what it wants to do. The sorting is what we’re forcing it to do.
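In the toy sketch above, this variant is a one-line change, again assuming the toy stands in for the paper’s actual setup: draw the values with replacement instead of as a permutation, so repeats are allowed and the final sorted order no longer forces the algotypes apart.

```python
# variant: repeated digits relax the sorting constraint, leaving room
# for the clustering to persist (values drawn with replacement)
cells = [(random.randrange(10), random.choice(["bubble", "selection"]))
         for _ in range(40)]
```

And my only point is, if the bubble sort,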
which has been gone over and
gone over how many times, has
these kinds of things that we didn’t
see coming, what about the AIs, the
language model, everything else? Not
because they talk, not because they
say that they’re, you know, have an inner
perspective or any of that, but just from the fact that even the most minimal system surprises us with what happens. And
frankly, when I see this, tell
me if this doesn’t sound like
all of our existential story.
For the brief time that we’re here,
the universe is going to grind us into
dust eventually, but until then,
we get to do some cool stuff
that is intrinsically motivating
to us, that is neither
forbidden by the laws of physics
nor determined by the laws
of physics, but eventually, it
kind of comes to an end. So
I think that’s the aspect of it: there are spaces. Even in algorithms, there are spaces in which you can do other new
things, not just random stuff, not just
complex stuff, but things that are easily
recognizable to a behavior scientist.
You see, that’s the point
here. And I think that kind of
intrinsic motivation is what’s
telling us that this idea that
we can carve up the world, we can
say, “Okay, look, biology is complex.
Cognition, who knows what’s responsible
for that, but at least we can
take a chunk of the world aside and
we can cut it off and we can say,
these are the dumb machines.”
These are just these algorithms…
Whereas we know the rules of
biochemistry don’t explain
everything we want to know about how psychology
is going to go, but at least the rules of
algorithms tell us exactly what the machines
are going to do, right? We have some hope
that we’ve carved off a little part of the world
and everything is nice and simple and it is
exactly what we said it was going to be. I
think that failed. I think it was a good
try. I think we have good
theories of interfaces, but even
even the simplest algorithms
have these kinds of things going
on. And so that’s why I think
something like this is significant.
Do you think there are going to be, in all kinds of systems of varying complexity, things that the system wants to do and things that it’s forced to do? So, are
there these unexpected competencies to be
discovered in basically all
algorithms and all systems?
That’s my suspicion, and I think that
it is extremely important for us as
humans to have a research program
to learn to recognize and predict.
We make things… Never mind something
as simple as this. We make, you know,
social structures, financial
structures, Internet of Things,
robotics, AI, so we make all this
stuff, and we think that the thing we
make it do is the main show.
And I think it is very
important for us to learn to recognize
the kind of stuff that sneaks
into the spaces.
It’s a very counterintuitive notion. By the way, I like the word emergent. I hear your criticism, and it’s a really strong one, that emergent is like you toss your hands up, like, “I don’t know the process,” but it’s just a beautiful word, because it’s… I guess it’s a synonym for surprising.
And I mean, this is very surprising,
but just because it’s surprising
doesn’t mean there’s not a
mechanism that explains it.
Mechanism and explanation are both not all
they’re cracked up to be in the
sense that, you know, anything
you and I do, we could come up with
the most beautiful theory. We paint a
painting, anything we do.
Somebody could say, “Well, I was
watching the biochemistry
and the Schrödinger
equation playing out,
and it totally described
everything that was happening. You didn’t
break even a single law of biochemistry.
Nothing to see here, nothing
to see, right?” Like,
okay, you know, consistent with the
low-level rules, you can do the same thing
here. You can look at the machine code and say,
“Yeah, this thing is just executing machine code.”
You can go further and say, “Oh, it’s quantum
foam. It’s just doing the thing that quantum
foam does.”
You’re saying that’s what physicists miss.
Well, and I’m not saying they’re unaware of
that. I mean, they’re generally a pretty
sophisticated bunch. I just think
they’ve picked a level and they’re
going to discover what is to be
seen at that level, which is a lot.
And my point is, the stuff that
the behavior scientists are
interested in shows up at a much
lower level than you think.
How often do you think there’s a misalignment
of this kind between the thing that a
system is forced to do
and what it wants to do? And it’s
particularly… I’m thinking about
various levels of
complexity of AI systems.
So right now, we’ve looked at, like,
five other systems. That’s a small
N, okay? But just looking
at that, I would find it
very surprising if bubble sort was
able to do this, and then there was some
sort of valley of death where nothing
showed up, and then living things.
Like, I can’t imagine that. And we actually have a system that’s even simpler than this, a 1D cellular automaton that’s doing some
weird stuff. If these things are to be
found in this kind of simple
system, I mean, they just
have to be showing up in these
other more complex AIs and things
like that. The only thing we don’t
know, but we’re going to find
out, is to what extent
there is interaction
between these. So I call these things side quests, you know, like in a game, where there’s the main thing you’re supposed to do. The thing about this is you have to sort. You have to sort; there’s no miracle. You’re going to sort. But as long as you can do other stuff while you’re sorting, it’s not forbidden.
And what we don’t know is, to what extent
are the two things linked? So if you do have
a system that’s very good at language, are the
others, the side quests that it’s capable of,
do they have anything to do with language
whatsoever? We don’t know the answer
to that. The answer might be no,
in which case all of the stuff that
we’ve been saying about language models
because of what they’re saying, all of
that could be a total red herring and not
really important, and the really exciting
stuff is what we never looked for.
Or in complex systems, maybe those things
become linked. In biology, they’re
linked. In biology, evolution makes sure
that the things you’re capable of have
a lot to do with what you’ve actually
been selected for. In these things,
I don’t know, and so we might find out that
they actually do give the language some
sort of leg up, or we might
find that the language is just, you know, not the interesting part.
Also, it is an interesting
question of this intrinsic
motivation of clustering. Is this a
property of the particular
sorting algorithms? Is
this a property of all
sorting algorithms? Is
this a property of all algorithms
operating on lists, on
numbers? How big is this? So
for example, with LLMs, is it
a property of any algorithm
that’s trying to model
language, or is it very specific to transformers? That’s all to be discovered.
We’re doing all that. We’re testing this stuff
in other algorithms. We’re looking for…
We’re developing suites of code
to look for other properties.
We, you know, to some extent, it’s very
hard because we don’t know what to look
for, but we do have a behaviorist
handbook which tells you all kinds of
things to look for: the delayed
gratification, the problem
solving, like, we have all that. I’ll tell you an N-of-one example of an interesting biological intrinsic motivation, because people…
So in the alignment community and stuff,
there’s a lot of discussion about
what are the intrinsic motivations going to be of AIs?
What are their goals going to be, right? What are they
going to want to do? Just
as an N of one observation,
anthrobots, the very first thing
we checked for… So this is not
experiment number 972 out of a thousand
things. This is the very first thing we
checked for. We put them on a plate of
neurons with a big wound through them, a big
scratch. First thing they did was
heal the wound, okay? So it’s an N of
one, but I like the fact that the
first intrinsic motivation that we
noticed out of that system was benevolent
and healing. I thought that was pretty cool.
And we don’t know. Maybe the next 20 things we
find are going to be some sort of, you know,
damaging effects. I can’t tell you
that. But the first thing that we saw
was kind of a positive one. And I
don’t know, that makes me feel better.
What was the thing you mentioned with the
anthrobots that they can reverse aging?
There’s a procedure called
an epigenetic clock where
what you can do is look at
particular epigenetic states of
cells and compare to a
curve that was built
from humans of known age. You
can guess what the age is. Okay?
This is Steve Horvath’s work, and many other people’s: when you take a set of cells, you can estimate their biological age.
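As a cartoon of how such a clock is built, not Horvath’s actual model, which is an elastic-net regression over hundreds of CpG methylation sites with a calibrated age transform, here is the basic idea on synthetic data:

```python
# Cartoon of an epigenetic clock on synthetic data (illustration only;
# real clocks like Horvath's use elastic-net regression over hundreds
# of CpG sites). Fit age from methylation levels, then predict it.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_cpgs = 200, 50
true_w = rng.normal(size=n_cpgs)                   # hidden relationship

meth = rng.uniform(0, 1, size=(n_people, n_cpgs))  # methylation fractions
age = 40 + 10 * meth @ true_w + rng.normal(0, 2, size=n_people)

# ridge regression: w = (X^T X + lam * I)^-1 X^T y
X = np.hstack([meth, np.ones((n_people, 1))])      # add an intercept
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_cpgs + 1), X.T @ age)

new_cells = rng.uniform(0, 1, size=n_cpgs)         # a new sample's methylation
print("estimated biological age: %.1f" % (np.append(new_cells, 1.0) @ w))
```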
So we make the anthrobots from
cells that we get from human
tracheal epithelium. We
collaborated with Steve’s group, the
Clock Foundation. We sent them a
bunch of cells and we saw that if
you check the anthrobots themselves,
they are roughly 20% younger than
the cells they come from. That’s amazing, and I can
give you a theory of why
that happens, although we’re still
investigating. And then I could tell you the
implications for longevity and things
like that. My theory for why it
happens, I call this age evidencing.
And I think that what’s happening here,
like with a lot of biology, is
that cells have to update their
priors based on experience. And
so I think that they come from
an old body. They have a lot of priors about
how many years they’ve been around and all
that, but their new environment
screams, “I’m an embryo,” basically.
There are no other cells around. You’re being bent
into a pretzel. They actually express some embryonic
genes. They say, “You’re
an embryo.” And I think
it’s not enough new evidence to roll them all the way back, but it’s enough to update them to about 28% back.
Yeah, so it’s similar to, like, when an older adult gives birth to a child. You’re saying you could just fake it till you make it with age? Like, the environment convinces the cell that it’s young?
Well, first of all, yeah, yes. That’s my hypothesis.
That’s nice.
And we have a whole bunch of research being
done on this. There was a study where they
went into an old age home
and they redid the décor,
like ’60s style, when all these
folks were really young. And they found all kinds of improvements in blood chemistry and stuff like that, because, they say, it was sort of mentally taking them back to the way they were at that time. I
think this is a basal version of that,
that basically if you’re finding
yourself in an embryonic
environment, what’s more plausible, that you’re young, or what? You know, I think this is the basic feature of biology: to update
priors based on experience.
Do you think that’s
actually actionable for
longevity? Like, you can convince cells
that they’re younger and thereby extend
their lifespan?
This is what we’re trying to do, yeah.
Could it be as simple as that?
Well, that’s not simple.
That is in no way simple.
But again, all of this, all of the regenerative medicine stuff that we do, balances on one key thing, which is learning to communicate with the system. If you’re going to
convince that system… You know, when we make gut tissue into an eye, you have to convince those cells that their priors about “we are gut precursors” are wrong, and that they should adopt this new worldview: you’re going to be an eye. So being convincing, figuring out what kinds of messages are convincing to cells, how to speak the language, and how to make them take on new beliefs, literally, is at the root of
all of these future advances in birth
defects and regenerative medicine and
cancer. And that’s what’s going on here.
So I’m not saying it’s simple, but I can
see the path.
Going back to the Platonic
space, I have to ask: if our brains are indeed
thin client interfaces
to that space, what does that mean for
our mind? Like, can we upload the mind?
Can we copy it? Can we ship it over
to other planets? What does that mean
for exactly where the mind is stored?
Yeah. Couple of things. So we
are now beyond anything that
I can say with any certainty. This is total
conjecture. Okay? Because we don’t know
yet. The whole point of this is we actually don’t really
understand very well the relationship between the
interface and the thing.
And the thing you’re currently
working on is to map-
Correct.
this space?
Correct. And we are beginning to map
it, but, you know, this is a massive
effort. So a couple of
conjectures here. One is that I
strongly suspect that the majority
of what we think of as the mind
is the pattern in that
space. Okay? And one
of the interesting predictions from that
model, which is not a prediction of modern
neuroscience, is that
there should be cases
where there is very minimal brain, and yet
normal IQ function. This has been seen
clinically. Corrina Kaufman and I reviewed
this in a paper recently, a bunch of
cases of humans where there’s very little
brain tissue, and they have normal, and sometimes above normal, intelligence. Now, things are not
simple because that obviously doesn’t happen
all the time, right? Most of the time it
doesn’t happen. So, what’s going on?
We don’t understand. But it is a very
curious thing that is not a prediction of standard neuroscience. I’m not saying it can’t be accommodated… You know, you can take modern
neuroscience and sort of bend it into
a pretzel to accommodate it. You can say,
“Well, there are these, you know, kind of
redundancies and things like this,” right? So
you can accommodate it, but it doesn’t predict
this. So there are these incredibly curious
cases. Now, do I think you can copy
it? No. I don’t think you
can, because what you’re
going to be copying is the interface,
the front end. The brain or
the, you know, whatever. The action
is actually the pattern in the
Platonic space. Are you going to be able to
copy that? I doubt it. But what you could
do is produce another interface
through which that particular
pattern is going to come through. I think
that’s probably possible. I can’t say
anything about… At this point,
about what that would take, but my
guess is that that’s possible.
Is your guess, your gut, that that
process, if possible, is different than
copying? Like, it looks more like
creating a new thing versus copying.
For the interface. So, here’s my prediction for the Star Trek transporter.
For whatever reason, right
now, your brain and body are very
attuned and attractive to a particular
pattern, which is your set of
psychological propensities. If we could rebuild
that exact same thing somewhere else, I
rebuild that exact same
thing somewhere else, I
don’t see any reason why that same pattern
wouldn’t come through it the same way it comes
through this one. That’s… That would
be a guess, you know? So, I think what
you will be copying is the
physical interface, and
hoping to maintain whatever it is about
that interface that was appropriate
for that pattern. We don’t really
know what that is at this point.
So, when we’ve been talking about mind,
in this particular case it’s the most
important to me because I’m a human.
Does self come along with that?
Does the feeling, like,
this mind belongs to me?
Does that come along with
all minds?
The subjective…
Not the subjective experience. The subjective experience is important too, consciousness. But, like, the ownership.
I suspect so, and I think so because of
the way we come into being. So, one of the
things that I should be working on is this
paper called Booting Up the Agent, and
it talks about the very earliest steps
of becoming a being in this world. Kind of
like you can do this for a computer, right?
Before you switch the power
on, it belongs to the domain
of physics, right? It obeys the laws of
physics. You switch the power on, some number
of, what, nanoseconds,
microseconds, I don’t know, later,
you have a thing that, oh look, it’s
taking instructions off the stack and
doing them, right? So now it’s
executing an algorithm. How did you get
from physics to executing an
algorithm? Like, what was happening
during the boot up exactly before
it starts to run code or whatever,
right? And so we can ask
that same question in
biology. What are the earliest
steps of becoming a being?
Yeah, that’s a fascinating question. Through embryogenesis, at which point does the booting happen? Do you have a hope of an answer to that?
Well, I think so. I think
so in two ways. The
first thing is just physically what happens.
So, I think that your first task as
a being, and again, I
don’t think this is a
binary thing. I think this is a positive
feedback loop that sort of cranks on
up and up. Your first task as a
being coming into this world is to
tell a very compelling story to your
parts. As a biological, you are
made of agential parts. Those parts
need to be aligned, literally, into a goal they have no comprehension
of. If you’re going to move
through anatomical space by means
of a bunch of cells which only know
physiological and, you know, metabolic
spaces and things like that,
you are going to have to
develop a model and bend their action space. You’re going to have to deform their option space with signals, with behavior-shaping cues, with rewards and punishments, whatever you’ve got. Your job as an agent
is ownership of your parts, is
alignment of your parts. I think that
fundamentally is going to
give rise to this ability.
Now, that also means having a boundary
saying, “Okay, this is the stuff I control.
This is me. This other stuff over here is
outside world.” I have to figure out…
You don’t know where that is, by
the way. You have to figure it out.
And in embryogenesis, it’s really cool.
As a grad student, I used to do this
experiment with duck embryos, which are
a flat blastodisc. You can take a needle
and put some scratches into it, and every island
you make, for a while until they heal up,
thinks it’s the only embryo. There’s nothing
else around, so it becomes an embryo.
And eventually you get twins and triplets
and quadruplets and things like that.
But each one of them at the
border, you know, they’re joined.
Well, where do I end and where does
he begin? You have to know what your
borders are. So that action of
aligning your parts and coming to be this, I’m even going to say it,
this emergence. We just don’t have
a good vocabulary for it. This emergence
of a model that aligns all the parts
is really critical to keep that thing
going. There’s something else that’s really
interesting, and I was thinking
about this in the context of this
question of like, you know, these
beautiful kinds of ideas, you know?
There’s this amazing thing
that we found, and this is
largely the work of Federico Pigozzi in my group. So a couple of
years ago, we saw that
networks of chemicals can
learn. They have five or six different
kinds of learning that they can
do. And so what I asked them to do was
to calculate the causal
emergence of those networks
while they’re learning. And
what I mean by that is this:
If you’re a rat, and you learn
to press a lever and get a
reward, there’s no individual cell that
had both experiences. The cells in your paw touched the lever.
The cells in your gut got the delicious
reward. No individual cell has
both experiences. Who owns
that associative memory?
Well, the rat. So that means you
have to be integrated, right?
If you’re going to learn associative memories
from different parts, you have to be an
integrated agent that can do that. And so
we can measure that now with metrics of causal emergence like phi (Φ) and things like that. So we know that in order to learn, you have to have significant phi.
But I wanted to ask the opposite
question. What does learning do for
your phi level? Does it do anything
for your degree of being an agent that
is more than the sum of its parts?
So we trained the networks, and sure enough,
some of them, not all of them, but some of
them, as you train them, their phi goes up, okay? And so basically what we were able to find is that there is this positive feedback loop: every time you learn something, you become more of an integrated agent. And
every time you do that, it becomes easier to
learn. And so, it’s this-
It’s a virtuous cycle.
It’s a virtuous cycle. It’s an asymmetry
that points upwards for agency and
intelligence. And now back to
our platonic space stuff, where
does that come from? It doesn’t come from
evolution. You don’t need to have any evolution for
this. Evolution will optimize the crap out of it,
for sure. But you don’t need evolution to have
this. It doesn’t come from physics. It
comes from the rules of information, the
causal information theory, and the behavior
of networks. They’re mathematical objects.
This is not anything that was given to you by physics or by a history of selection. It’s a free gift from math. And those two free gifts from math lock together into a spiral that I think causes simultaneously a rise in intelligence and a rise in collective agency. And I think that’s just amazing to think about.
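To make the shape of that feedback loop concrete, here is a minimal sketch in Python. It is a toy illustration, not the lab’s actual pipeline: a tiny threshold network stands in for the chemical network, hill-climbing stands in for learning, and a crude information-theoretic proxy (whole-system predictive information minus what the parts predict about themselves) stands in for phi. As in the experiments described above, the proxy may or may not rise on any given run; the point is only that the question can be asked of a purely mathematical object.

```python
# Toy sketch: does "learning" change a phi-like integration measure?
# Assumptions: a 4-node threshold network, hill-climbing as learning,
# and a crude proxy for phi; none of this is the published method.
import itertools, math, random
from collections import Counter

N = 4  # a tiny network stands in for the chemical network

def mutual_info(pairs):
    """Mutual information (bits) between paired discrete samples."""
    n = len(pairs)
    pxy, px, py = Counter(pairs), Counter(a for a, _ in pairs), Counter(b for _, b in pairs)
    return sum(c / n * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def step(W, s):
    """One synchronous update: each node thresholds its weighted inputs."""
    return tuple(1 if sum(W[i][j] * s[j] for j in range(N)) > 0 else 0
                 for i in range(N))

def integration_proxy(W):
    """Crude phi-like proxy: what the whole predicts about its next state,
    minus what each part predicts about its own next state."""
    pairs = [(s, step(W, s)) for s in itertools.product([0, 1], repeat=N)]
    whole = mutual_info(pairs)
    parts = sum(mutual_info([(s[i], t[i]) for s, t in pairs]) for i in range(N))
    return whole - parts

def task_error(W):
    """Associative task: node 3 should come to report OR(node 0, node 1)."""
    return sum((step(W, s)[3] - (s[0] | s[1])) ** 2
               for s in itertools.product([0, 1], repeat=N))

random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
print("before:", round(integration_proxy(W), 3), "bits, error", task_error(W))
for _ in range(5000):  # naive hill-climbing as a stand-in for learning
    i, j = random.randrange(N), random.randrange(N)
    old, err = W[i][j], task_error(W)
    W[i][j] += random.gauss(0, 0.3)
    if task_error(W) > err:
        W[i][j] = old  # revert mutations that hurt the task
print("after: ", round(integration_proxy(W), 3), "bits, error", task_error(W))
```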
Well, that free gift from math, I think, is extremely useful in biology, when you have small entities forming networks, a hierarchy that builds more and more complex organisms. That’s obvious. I mean, this speaks to embryogenesis, which I think is one of the coolest things in the universe.
And in fact you acknowledge its coolness in the “Ingressing Minds” paper, writing, quote, “Most
of the big questions of
philosophy are raised by the process of
embryogenesis. Right in
front of our eyes, a single
cell multiplies and
self-assembles into a complex
organism, with order on every scale of
organization and adaptive
behavior. Each of us
takes the same journey across
the Cartesian cut, starting off
as a quiescent human oocyte, a little
blob thought to be well-described
by chemistry and physics.
Gradually, it undergoes metamorphosis
and eventually becomes a
mature human with hopes, dreams, and a
self-reflective metacognition that can
enable it to describe itself as a being, not a machine, that’s more than its brain, body, and underlying molecular mechanisms,” and so
on. What, in all of our discussion, can we say gives a clear intuition for how it’s possible to take the leap from a single cell to a fully functioning organism full of dreams and hopes and friends and love and all that kind of stuff? In everything we’ve been talking about, which has been a little bit technical, how do we understand it? Because
that’s one of the most magical things the
universe is able to
create, perhaps the most
magical. From simple physics and chemistry, to create this: us two talking about ourselves.
I think we have to keep in mind
that physics and chemistry are
not real things. They are
lenses that we put on the world. They are perspectives where
we say, “We are, for the time
being, for the duration of this
chemistry class or career or whatever,
we are going to put aside all the other
levels, and we’re going to
focus on this one level.” And
what is fundamentally going on during that process is an amazing positive feedback loop of collective intelligence. It’s the physical interface that is scaling, and with it the cognitive light cone that it can support. So it’s going from a molecular
network. The molecular network can already
do things like Pavlovian conditioning. You
don’t start with zero. When you have a
simple molecular network, you are
already hosting some patterns
from the platonic space that look like
Pavlovian conditioning. You’ve already got that
starting out. That’s just the molecular
network. Then you become a cell,
and then you’re many cells. And
now you’re navigating anatomical morphospace, and you’re hosting all
kinds of other patterns. And eventually, and I think this is all the stuff we’re trying to work out now, there’s a consistent feedback between the ingressions you get and the ability to have new ones. Again, it’s this positive feedback cycle, where the more of these free gifts you pull down, the more they allow you to physically develop in ways where, “Oh look, now we’re suitable for more and higher ones.” And this continuously goes and goes until you’re able to pull down a full human set of behavioral capacities.
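As an aside on the claim above that a bare molecular network can already host Pavlovian conditioning: a minimal sketch of the idea, in a Hebbian style, might look like the following. The three variables (US, CS, and a slow coupling weight) and the learning rule are illustrative assumptions, not a published model.

```python
# Toy Pavlovian conditioning in a three-variable "chemical" system:
# US drives the response directly; pairing US with CS grows a slow
# coupling w, so that CS alone eventually evokes the response.
# The rule w += lr * us * cs - decay * w is an illustrative assumption.
def simulate(schedule, w=0.0, lr=0.2, decay=0.01):
    responses = []
    for us, cs in schedule:
        r = us + w * cs                  # response: unconditioned + learned path
        w += lr * us * cs - decay * w    # coupling grows when US and CS co-occur
        responses.append(round(r, 2))
    return responses, w

pairing = [(1, 1)] * 10                  # training: bell paired with food
probe = [(0, 1)] * 3                     # test: bell alone

resp, w = simulate(pairing)
resp_cs_only, _ = simulate(probe, w=w)
print("during pairing:", resp)
print("CS alone after training:", resp_cs_only)  # nonzero: a conditioned response
```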
What is the mechanism of such
radical scaling of the cognitive
cone? Is it just this kind of the
same thing that you were talking
about with the network of
chemicals being able to learn?
I’ll give you two mechanisms that
we found. But again, just to be
clear, these are mechanisms of the physical
interface. What we haven’t gotten is
a mature theory of how they map onto the space; that’s just beginning.
But I will tell you what the
physical side of things looks like.
The first one has to do
with stress propagation.
So imagine that you’ve got a bunch
of cells, and there’s a cell down
here that needs to be up there. Okay. All of
these cells are exactly where they need to
go, so they’re happy, their stress
is low. This cell… Now, let’s imagine stress is basically a…
It’s a physical implementation
of the error function.
It’s basically
the delta between where you are now and where
you need to be. Not necessarily in physical
position, this could be in anatomical space,
in physiological space, and in
transcriptional space, whatever, right?
It’s just the delta from your set point.
So you’re stressed out, but these guys are
happy, they’re not moving. You can’t get
past them. Now imagine what you could do is leak your stress, whatever your stress molecule is. And the cool thing is that evolution has actually conserved these highly; we’re studying all of these things, and they’re highly conserved. If you start leaking
your stress molecules, then all of this stuff
around here is starting
to get stressed out.
When things start to get stressed out, their temperature goes up. Not physical temperature, but in the sense of simulated annealing or something, right? Their plasticity goes up. Because they’re
feeling stress, they need to relieve that
stress, and because all the stress molecules
are the same, they don’t know it’s
not their stress. They are
equally irritated by them as if
it was their own stress, so they become a
little more plastic. They become ready to kind
of, you know, adopt different fates.
You get up to where you’re going,
and then everybody’s stress can drop.
So notice what can happen by a very
simple mechanism: just be leaky
for your own stress. My problems
become your problems, not because you’re
altruistic, not because you actually care about my
problems. There’s no mechanism for you to actually
care about my problems, but just that simple
mechanism means that
faraway regions are now responsive
to the needs of other regions, such
that complex rearrangements and things
like that can happen. It’s an alignment
of everybody to the same goal through this
very dumb, simple stress-sharing thing.
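A minimal sketch of that stress-sharing logic, as described here (the numbers and update rule are assumptions, not the lab’s model): one misplaced cell needs to migrate past a row of happy neighbors, and a neighbor only becomes plastic enough to let it pass in proportion to the stress that leaks onto it.

```python
# Toy "leaky stress" model: a neighbor's plasticity scales with the stress
# it feels, and with zero leak the happy bystanders never budge.
import random
random.seed(2)

def run(leak, n=10, max_steps=400):
    """Steps for the one stressed cell to migrate from one end to the other."""
    pos = 0                                # the misplaced cell starts at one end
    for step in range(1, max_steps):
        if pos == n - 1:
            return step                    # it reached its set point
        neighbor_felt = leak * 1.0         # the bystander ahead feels only leaked stress
        plasticity = min(1.0, neighbor_felt)
        if random.random() < plasticity:   # a plastic neighbor lets the swap happen
            pos += 1
    return None                            # never arrived

for leak in (0.0, 0.2, 0.8):
    runs = [run(leak) for _ in range(200)]
    done = [r for r in runs if r is not None]
    mean = sum(done) / len(done) if done else float("nan")
    print(f"leak={leak}: arrived {len(done)}/200 times, mean steps {mean:.1f}")
```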
Via leaky stress.
Leaky stress, right? So
there’s another one,
which I call memory
anonymization. So, imagine
here are two cells. Imagine
something happens to this cell,
and it sends a signal over to this
cell. Traditionally, you send a
signal over, this cell receives
it. It’s very clear that it came from
outside, so this cell can do many things.
It could ignore it, it could take on the information, it could reinterpret it, it could do whatever, but it’s very clear that it came from outside. Now
imagine the kind of thing that
we study, which is called
gap junctions. These are electrical
synapses that could directly
link the internal milieus of two cells. If
something happens to this cell, let’s say it gets poked, and there’s a
calcium spike or something, that propagates
through the gap junction here,
this cell now has the same
information, but this cell has no idea,
“Wait a minute, was that… is that my
memory, or is that his memory?” Because it’s
the same, right? It’s the same components,
and so what you’re able to
do now is to have a mind
meld. You can have a mind meld between
the two cells where nobody’s quite
sure whose memory it is. When you
share memories like this, it’s
harder to say that I’m separate from you. If
we share the same memories, we are kind of
a… and I don’t mean every single memory,
right? So they still have some identity,
but to a large extent, they have a little bit
of a mind meld, and there are many complexities you can layer on top of it. But what
it means is that if you have a large
group of cells, they now have joint
memories of “what happened to us,” as opposed to you knowing what happened to you and I knowing what happened to me. That enables
a higher cognitive light cone,
because you have greater computational
capacity, you have a greater area of
concern, of things you want to manage. I
don’t just want to manage my tiny, little
memory states because I’m getting your memories.
Now I know I’ve got to manage this whole thing.
So both of these things end up
scaling the size of things you
care about, and that is a major ladder for cognition: scaling the size of the concern that you have.
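Here is a minimal sketch of the difference between the two signaling modes as described above; the two-method Cell class is purely illustrative, with provenance modeled as an explicit tag that gap-junction coupling erases.

```python
# Toy "memory anonymization": conventional signaling keeps a provenance tag;
# a gap junction merges raw state, so nobody knows whose memory it is.
from dataclasses import dataclass, field

@dataclass
class Cell:
    memories: list = field(default_factory=list)

    def experience(self, event):
        self.memories.append(("mine", event))      # it happened to me

    def receive_signal(self, event):
        self.memories.append(("external", event))  # clearly came from outside

    def gap_junction(self, other):
        # electrical synapse: internal states merge, provenance tags are lost
        shared = [("mine?", e) for _, e in self.memories + other.memories]
        self.memories, other.memories = shared, list(shared)

a, b = Cell(), Cell()
a.experience("poked: calcium spike")
b.receive_signal("poked: calcium spike")   # b can still say: that was his, not mine
print("conventional signaling:", b.memories)

a2, b2 = Cell(), Cell()
a2.experience("poked: calcium spike")
a2.gap_junction(b2)                        # the spike propagates as shared state
print("after gap junction:   ", b2.memories)  # whose memory is it now?
```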
It’d be fascinating to be able
to engineer that scaling.
Probably applicable to AI systems. How
do you rapidly scale the cognitive cone?
Yeah, yeah. We have some collaborators…
Light cone.
in a company called Softmax that
we’re working with to do some of that
stuff in biology. That’s our cancer
therapeutic, which is that what you see in cancer, literally, is cells electrically disconnecting from their neighbors. They were part of a giant memory that was working on making a nice organ. Now they
can’t remember any of that. Now they’re just
amoebas, and the rest of the body is
just external environment. And what we
found is if you then
physically reconnect them to
the network, you don’t have to fix the
DNA, you don’t have to kill the cells with
chemo. You can just reconnect them, and
they go back, because they’re now part of this larger collective, to what they were working on.
And so, yeah, I think we can
intervene at that scale.
Let me ask you more explicitly about the SETI, the search for unconventional terrestrial intelligence. What do you hope to do
there? How do you actually try to find
unconventional intelligence all around
us? First of all, do you think on Earth
there is all kinds of incredible
intelligence we haven’t yet discovered?
I mean, guaranteed. We’ve already seen it in our own bodies, and I don’t just mean that we are host to a whole microbiome or any of that. I mean your cells, and we have all kinds of work on this: every day they traverse these alien spaces, 20,000-dimensional spaces, and other spaces. They solve
problems. I think they suffer when they fail to meet their goals, and they have stress reduction when they meet their goals. These
things are inside of us. They are all around
us. I think that we have an incredible degree of mind blindness
to all of the very alien kinds
of minds around us. And I think
that, you know, looking for aliens
off Earth is awesome and whatever.
But if we can’t recognize the
ones that are inside our own
bodies, what chance do we have to really recognize the ones that are out there?
Do you think there could
be a measure like IQ for
mind? What would it be?
Not mindedness, but intelligence, something broadly applicable and generalizable to unconventional minds,
where we could even
quantify, like, “Holy shit,
this discovery is incredible
because it has this IQ”?
Yeah, yes and no. The yes part is that
as we have shown, you can take existing IQ metrics, I mean literally the existing kinds of ways that people use to measure intelligence in animals and humans and whatever, and you can apply them to very weird things.
If you have the imagination to
make the interface, you can do it.
And we’ve done it, and we’ve shown creative problem-solving and all this kind of stuff. So, yes.
However, we have to be humble about these
things and recognize that all of those IQ metrics that we’ve come up with so far
were derived from an N of one
example of the evolutionary
lineage here on Earth, and so we
are probably missing a lot of them.
So I would say we have plenty to start.
We have so much to start with. We could
keep tens of thousands of
people busy just testing things
now, but we have to be aware that we’re
probably missing a lot of important ones.
Well, what do you think has more interesting, intelligent, unconventional minds: inside our body, the human body, or, like we were talking about off-mic, the Amazon jungle, nature, natural systems outside of us? The sophisticated biological systems we’re aware of?
Yeah. We don’t know, because it’s
really hard to do experiments on larger
systems. It’s a lot easier to go
down than it is to go up. But my
suspicion is, you know,
like the Buddhists say,
innumerable sentient beings, I think by the
time you get to that degree of infinity, it
kinda doesn’t matter to compare. I suspect
there are just massive numbers of them.
Yeah, I think it really matters which
kinds of systems are amenable to our
current methods of scientific inquiry.
I mean, I spent quite a lot
of hours just staring at ants
when I was in the Amazon, and
it’s such a mysterious, wonderful
collective intelligence. I don’t
know how amenable it is to
research. I’ve seen some folks
try. You can simulate it, but I feel like we’re missing a lot.
I’m sure we are, but one of my favorite
things about that kind of work,
have you seen there’s at least three or
four papers showing that ant colonies
fall for the same visual
illusions that we fall for?
Not the ants, the colonies. So, if you…
The colony together. Yeah.
The colonies. So if you lay out food in particular patterns, they’ll do
things like complete lines that aren’t there
and… And like all the same shit
that we fall for, they fall for.
So I don’t think it’s hopeless, but I do think
that we need a lot of work to develop tools.
Do you think all of the tooling that we
develop and the mapping that we’ve been
discussing will help us do the SETI part, finding aliens out there?
I think it’s essential. I think
it’s essential. We are so parochial
in what we expect to
find in terms of life,
that we are going to be just
completely missing a lot of
stuff. We can’t even agree on, never mind definitions of life, but, you know, what’s actually important. I read a paper recently where they asked about 65 or so modern working scientists for a definition of life. And we had so many different definitions across so many different dimensions that we had to use AI to make a morphospace out of it.
And there was zero consensus about
what actually is important, you know?
And if we’re not good at recognizing it
here, I just don’t see how we’re going to
be good at recognizing it somewhere else.
So, given how miraculous
life is here on Earth,
it’s clear to me that we have
so much more work to do.
That said, would it be exciting to you if we found life on other planets in the
solar system? Like, what would
you do with that information?
Or is that just another
life form that we don’t understand?
I would be very excited about it
because it would give us some
more unconventional
embodiments to think about.
Right? A data point that’s pretty far
away from our existing data points,
at least in this solar system.
So that would be cool.
I’d be very excited about it.
But I must admit that my level
of surprise has been pushed
so high at this point
that it would have to… you know, it
would have to be something really weird
to make me shocked. I mean, the things that we see every day are just, uh…
I think you’ve mentioned in a few places that the “Ingressing Minds” paper is not the weirdest thing you plan to write. How
weird are you gonna get?
Maybe a better question
is, in which direction
of weirdness do you think
you will go in your
life? In which direction of the weird
Overton window are you going to expand?
Yeah. Well, the background
to this is simply
that I’ve had a lot of weird ideas for many,
many decades, and my general policy is
not to talk about stuff
until it becomes actionable.
And the amazing thing, I’m really just kind of shocked, is that in my lifetime the empirical work has moved so fast. I really didn’t think we would get this far. And I have this mental knob of what percentage of the weird things I think I actually say in public, right?
And every few years when the empirical
work moves forward, I sort of turn that
knob a little, right, as we keep going.
So I have no idea if we’ll continue to be that fortunate, or how long I can keep doing this. Like, I don’t know.
But just to give you a direction for it: it’s going to be in the direction of what kinds of things we need to take seriously as other beings to relate to. So I’ve already
pushed it, you know, so like, we
knew brainy things, and then we
said, “Well, it’s not just brains.”
And then we said, “Well, it’s not
just…” So, you know, it’s not just
in physical space, and it’s not just
biologicals, and it’s not just complexity.
There are a couple of other
steps to take that I’m pretty
sure are there, but we’re gonna have to
do the actual work to make it actionable
before, you know, before we really
talk about it. So that direction.
I think it’s fair to say you’re one
of the more unconventional humans,
scientists out there. So the interesting
question is, what’s your process of idea
generation? What’s your process of discovery?
You’ve worked on a lot of really incredibly interesting, like you said, actionable, but out-there ideas that you’ve actually engineered, with Xenobots and Anthrobots, these kinds of things. Like, when you go home tonight, or go to the lab, what’s the process? Empty sheet of paper, when you’re thinking through it?
Well, the mental part is, funny enough, a lot like making Xenobots. You know, we make Xenobots by releasing constraints, right? We don’t do anything to them. We just release them from the constraints they already have, and then we see.
So a lot of it is releasing
the constraints that mentally
have been placed on us. And part of
it is my education has been a little
weird ‘cause I was a computer scientist
first, and only later biology. So by the
time I heard all the biology things that
we typically just take on
board, I was already a little
skeptical and thinking a
little differently. But
a lot of it comes from releasing constraints.
I very specifically think about, okay,
this is what we know. What would
things look like if we were wrong? Or
what would it look like if I was wrong?
What are we missing? What is our worldview
specifically not able to see, right?
Whatever model I have. Or another way I
often think is I’ll take two things that
are considered to be very different
things, and I’ll say, “Let’s just
imagine those as two points on a
continuum.” What does that look like? What
does the middle of that continuum look like?
What’s the symmetry there?
What’s the parameter, you know, what’s the knob I can turn to get from here to there? So those
kinds of… I look for symmetries a lot.
I’m like, okay, this thing is like that one. In what way? What’s the fewest
number of things I would have to move to
make this map onto that? Right? So these
are, you know, those are kind of
mental tools. The physical process for
me is basically, I mean, obviously
I’m fortunate to have a lot of discussions
with very smart people. So in my group, you know, I’ve hired some amazing people, so we of course have a lot of discussions,
and some stuff comes out
of that. My process is, pretty much every morning, I’m outside for sunrise, and I walk
around in nature. There’s
just not really anything
better as inspiration, right?
Than nature. I do photography, and I
find that it’s a good meditative
tool because it keeps your hands and
brain just busy enough. Like, you don’t have to
think too much, but you know, you’re sort of
twiddling and looking and doing some
stuff, and it keeps your brain off of the
linear, like logical, like careful train
of thought enough to release it so
that you can ideate a little
more while your hands are busy.
So it’s not even the thing you’re
photographing, it’s the mechanical process of
doing the photography.
And mentally, right? Because I’m not walking
around thinking, “Okay, let’s see, so
for this experiment, we gotta, you know, I gotta
get this piece of equipment and this…” Like, that
goes away, and it’s like, okay, what’s the
lighting and what’s the… What am I looking
at? And during that time when you’re
not thinking about that other stuff,
then I say, “Well, yeah, I gotta get a… I
got a notebook,” and I’m like, “Look, this is
what we need to do.”
So that kind of stuff.
And the actual writing down of ideas, is it a notebook? Is it a computer? Are you super organized, or is it just, like, random words here and there with drawings? And also, what is the space of thoughts you have in your head? Is it sort of amorphous, things that aren’t very clear? Are you
visualizing stuff? Is there
something you can articulate there?
I tend to leave myself
a lot of voicemails.
Because as I’m walking around, I’m like,
“Oh man, this idea,” and so I’ll just call
my office and leave myself a
voicemail for later to transcribe.
Nice.
I don’t have a good enough memory to remember
any of these things, and so what I keep
is a mind map. So I have an
enormous mind map. One piece
of it hangs in my lab so that people can
see, “These are the ideas, this is how they
link together. Here’s everybody’s project.
I’m working on this. How does this attach to
everybody else’s so they can track it?” The thing
that hangs in the lab is about nine feet wide.
It’s a silk sheet, and it’s out
of date within a couple of weeks
of my printing it, because new
stuff keeps moving around. Um,
and then there’s more that
isn’t for anybody else’s
view. But yeah, I try to be very
organized because otherwise,
I forget. So everything is
in the mind map. Things are in manuscripts.
I have something like, right now, probably 162 or 163 open manuscripts that are in
the process of being written at various
stages. And when things come up,
I stick them in the right manuscript, in the right
place, so that when I’m finally ready to finalize,
then I’ll put words around it and whatever.
But there are outlines of everything.
So I try-
Ah
…to be organized, because I can’t… I don’t have the memory, you know?
So there’s a wide front of
manuscripts of work that’s being
done, and it’s continuously
pushing towards completion,
but you’re not clear what’s going to be finished when and how, and-
That-
When is the actual-
That’s… I mean, that’s… Yes,
but that’s just the theoretical,
philosophical stuff. The empirical work
that we’re doing in the lab, I mean, those
are… We know exactly, you know-
It’s more focused. There’s a
specific set of questions.
Like, we know: this is Anthrobot aging, this is limb regeneration, this is the new cancer paper, this is whatever. Yeah, those things are very linear.
Where do you think ideas come
from when you’re taking a walk
that eventually materialize
in a voicemail? Where’s that from? Is that from you? You know, some of the most interesting people feel like they’re channeling from somewhere else.
I mean, I hate to bring up the Platonic
space again, but I mean, if you talk to any
creative, that’s basically what
they’ll tell you, right? And
certainly that’s been my experience,
so I feel like it’s a… The way it feels to me is a collaboration. The collaboration is: I need to bust my ass and be prepped. A, to work hard, to be able to recognize the idea when it comes, and B, to actually have an outlet for it, so that when it does come, we have a lab and we have people who can help me do it, and then we can actually get it out, right? So that’s my part: you know, be up at 4:30 AM doing your thing and be ready for it.
But the other side of the collaboration is
that, yeah, when you do that, like, amazing
ideas come, and, you know, to say that it’s me, I don’t think, would be right. You know, I think it’s definitely coming from other places.
What advice would you give to scientists, PhD
students, grad students, young scientists
that are trying to explore
the space of ideas
given the very unconventional,
non-standard, unique
set of ideas you’ve explored
in your life and career?
Um, let’s see. Well, the first and
most important thing I’ve learned
is not to take too much advice, and
so I don’t like to give too much
advice. But I do have one
technique that I’ve found very
useful, and this isn’t for everybody,
but there’s a specific demographic.
There are a lot of unconventional people who reach out to me, and I try to respond and help them and so on. This
is a technique that I think is useful for
some people. How do I describe it? It’s the act of bifurcating your mind: you need to have
two different regions. One
region is the practical
region of impact. In
other words, how do I get
my idea out into the world so
that other people recognize it?
What should I say? What are people hearing?
What are they able to hear? How do
I pivot it? What parts do I not talk
about? Which journal am I going to
publish this in? Is it time now? Do I
wait two years for this? Like, all the
practical stuff that is all about how
it looks from the outside, right?
All the stuff that I can’t say this
or I should say this differently,
or this is going to freak people out, or
this is odd. You know, this community
wants to hear this so I can pivot it this
way. Like, all that practical stuff.
It’s got to be there; otherwise, you’re not going
to be in a position to follow up any of your ideas.
You’re not going to have a career. You can’t…
you’re not going to have resources to do anything.
But it’s very important that that can’t be the only thing. You need
another part of your mind that ignores all that shit completely,
because this other part of
your mind has to be pure.
It has to be I don’t care what anybody else thinks about
this. I don’t care whether this is publishable, describable.
I don’t care if anybody gets it. I don’t
care if anybody thinks it’s stupid.
This is what I think, and why, and give
it space to, to sort of grow, right?
And if you try to mush them together, I found that impossible, because the practical stuff poisons the other stuff.
If you’re too much on the creative
end, you can be an amazing thinker,
it’s just nothing ever materializes. But
if you’re very practical, it tends to
poison the other stuff because
the more you think about how to
present things so that
other people get it, it constrains and bends how you start to think.
And what I tell my students and others is there are two kinds of advice. There’s very
practical, specific things, like somebody
says, “Well, you forgot this control,”
or, “This isn’t the right method,”
or, “You shouldn’t be…”
That stuff is gold, and you should take that very seriously,
and you should use it to improve your craft, right?
And that’s, like, super important. But then there’s
the meta advice where people are like, “That’s
not a good way to think about it.
Don’t work on this. This isn’t…” That stuff is garbage.
And even very successful people
often give very constraining, terrible advice.
Like, one of my reviewers on a paper years ago said… I love this. The Freudian
slip. He said he was going to give me
constrictive criticism, right? And
that’s exactly what he gave me.
That’s funny.
It was constrictive criticism. I was like,
“That’s awesome.” That’s a great typo.
Well, it’s very true. I mean, the
bifurcation of the mind is beautifully
put. I do think some of the most interesting people I’ve met sometimes fall short on the normie side, on the practical side: having the emotional intelligence of how do I communicate this to people that have a very different worldview, that are more conservative and more conventional and more kind of fit into the norm. You have to have the skill to fit in. And then you have to, as you beautifully put it, be able to shut that off when you go off on your own and think. And having the two skills is very important.
I think a lot of radical thinkers think
that they’re sacrificing something
by learning the skill of fitting
in. But I think if you want to have impact, if you want your ideas to resonate, you have to, first of all, be able to build great teams that help bring your ideas to life. And
second of all, for your ideas to have
impact, and to scale, and to resonate with
a large number of people,
you have to have that skill.
And those are very different.
Those are very different.
Let me ask a ridiculous question.
You already spoke about it, but
what to you is one of the most
beautiful ideas that you’ve
encountered in your various
explorations? Maybe not just
beautiful, but one that makes you happy
to be a scientist, to be able to be a curious human exploring ideas.
I mean, I must say that, you know, I sometimes think about these ingressions from this space as a kind of steganography. So steganography is when you hide data and messages within the bits of another pattern that don’t matter, right?
And the rule of steganography is you
can’t mess up the main thing, you know?
So if it’s a picture of a cat or whatever,
you got to keep the cat. But if there’s bits
that don’t matter, you can kind of stick
stuff. So I feel like all these ingressions
are a kind of universal steganography: these patterns seep into everything, everywhere they can. And they’re kind of shy, meaning that they’re very subtle. If you work hard, you can catch them. They’re not invisible, but they’re hard to see.
And the fact that I think they also
affect, quote unquote, machines
as much as they certainly affect
living organisms, I think
is incredibly beautiful. And I personally am happy
to be part of that same spectrum, and
the fact that that magic is sort of
applicable to everything. A lot
of people find that extremely
disturbing, and that’s some of the hate
mail I get. It’s like, “Yeah, we were with
you on the majesty of life thing until you got
to the fact that machines get it too.” And
now, like, terrible, right?
You’re kind of devaluing the
majesty of life. And I don’t know. The
idea that we’re now catching
these patterns and we’re
able to do meaningful research on
the interfaces and all that is
just, to me, absolutely beautiful. And that it’s all one spectrum is, to me, amazing. I’m enriched by it.
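For readers who want the analogy made concrete, classic least-significant-bit steganography looks roughly like this (a minimal sketch; the byte string standing in for an image is an assumption): the message rides in the bits that don’t matter, and the main pattern, the cat, survives.

```python
# Minimal least-significant-bit steganography: hide a message in the
# lowest bit of each carrier byte without disturbing the visible pattern.
def hide(carrier: bytes, message: bytes) -> bytes:
    assert len(message) * 8 <= len(carrier), "carrier too small"
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit     # overwrite only the bit that doesn't matter
    return bytes(out)

def reveal(carrier: bytes, length: int) -> bytes:
    bits = [b & 1 for b in carrier[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length))

cat_picture = bytes(range(256))            # stand-in for an image: keep the cat!
stego = hide(cat_picture, b"hello")
assert reveal(stego, 5) == b"hello"        # the hidden message comes back out
assert max(abs(x - y) for x, y in zip(cat_picture, stego)) <= 1  # cat barely changes
```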
I agree with you. I think it’s incredibly
beautiful. I lied, there’s an even more
ridiculous question. So,
it seems like we are progressing towards possibly creating a superintelligent system, an AGI, an ASI.
If I had one, gave it to you,
put you in the room, what would
be the first question you ask it? Maybe the
first set of questions? Like, there’s so
many topics that you’ve worked on and
are interested in. Is there, like, a first question that you really want answered? If you could get a solid answer, what would it be?
I mean, the first thing I would ask is
how much should I even be talking
to you? For sure. Because
it’s not clear to me at all that
getting somebody to tell you
an answer in the long run is
optimal. It’s the difference
between when you’re a kid learning math
and having an older sibling that’ll just tell you the answers, right? Like, sometimes
it’s like, “Come on, just give me the answer.
Let’s move on with this cancer
protocol and whatever.” Like, great.
But in the long run, the
process of discovering it
yourself, how much of that
are we willing to give up?
And by getting a final answer,
how much have we missed of
stuff we might’ve found along the way? Now, I don’t know. The thing is, you know, I don’t think it’s correct to say, “Don’t do that at all. Take the time and all the blind alleys.” That may not be optimal either, but we don’t know what the optimal
is. We don’t know how much we should be
stumbling around versus having
somebody tell us the answer.
That’s actually a brilliant question to ask the AGI then.
I mean, if it’s really…
That’s a really…
If it’s really an AGI.
I mean, that’s a good first question.
Yeah, if it’s really an AGI, I’m like, “Tell me what the
balance is. Like, how much should I be talking to you
versus stumbling around in the lab and
making all my own mistakes?” Is it 70/30? You know, 10/90? I don’t
know. So that would be the first…
And then the AGI will say, “You
shouldn’t be talking to me.”
It may well be. It may say, “What the hell
did you make me for in the first place?
You guys are screwed.”
Like, that’s possible. Um,
You know, the second question I would
ask is, “What’s the question I should be asking you that I probably am not smart enough to ask?”
That’s the other thing I would say.
This is really complicated. That’s
a really, really strong question.
But again, there the answer might be…
You wouldn’t understand the question it proposes, most likely. So I think, for me, assuming you can ask a lot of questions, I would probably go for questions where
I would understand the answer. It would uncover
some small mystery that I’m super curious about.
Because if you ask big questions
like you did, which are
really strong questions, I just feel
like I wouldn’t understand the answer.
If you ask it, “What question should I be
asking you?” It would probably say something
like, “What is the shape
of the universe?” And you’re like,
“What? Why is that important?” You would be very confused by the question it proposes. I would probably want to know… It would just be nice for me to
know, straight up, first question,
how many living intelligent alien
civilizations are in the observable universe?
Yeah, that would just be nice, to know: is it zero, or is it a lot? I just want to know that.
Unfortunately, it might answer… It might be a Michael Levin answer.
A Michael Levin answer.
That’s what I was about to say, is that my guess
is it’s going to be exactly the problem you said,
which is it’s going to say, “Oh my God.
I mean, right in this room, you got-”
You know, and like, “Oh, man.”
Yeah, yeah, yeah. Everything
you need to know about alien
civilizations is right here in this room.
In fact, it’s inside your own body.
Just for starters.
Thank you-
…for starters.
AGI. Thank you. All right, Michael.
One of my favorite scientists,
one of my favorite humans. Thank you
for everything you do in this world.
Thank you so much.
Truly, truly fascinating work,
and keep going for all of us.
Thank you-
You’re an inspiration.
So much. Thank you so much.
It’s great to see you.
Always a good discussion. Thank
you so much, I appreciate it.
Thank you for this.
Thank you.
Thanks for listening to this conversation
with Michael Levin. To support this podcast,
please check out our sponsors in the
description where you can also find links to
contact me, ask questions,
give feedback, and so on.
And now, let me leave you with
some words from Albert Einstein.
“The most beautiful thing we can
experience is the mysterious. It is the
source of all true art and science.” Thank
you for listening. I hope to see you next
time.