How Social Media and AI Hijack Your Brain

Tristan Harris, founder of the Center for Humane Technology, has set in motion the Time Well Spent movement: a cultural awakening to the underhanded manipulation of our collective awareness through social media platforms and interactive tech. He is leading the conversation on ethics in the design of technology, especially the tech that pervades our every waking hour. Having previously worked behind the scenes of this attention-maximizing industry, Tristan brings to light the maladaptive inner workings of the attention economy and shows the public how our minds have been hijacked for financial gain, with no responsibility for the actual wellbeing of the users.

The human brain is no match for the flashing, pinging, and beckoning applications which compete with one another for our attention and gobble up an astounding percentage of the conscious energy of the world. His work is unique and controversial in that it challenges the ethics of several of the most powerful entities in the world. It's time to get real about the use of AI and the many clever psychological tricks that these platforms use to hook our minds into the screen and placate our impulses to keep us there. Tristan is helping people across the world wake up to our vulnerability to these digital forces, and to the threat that this poses to our sense-making and choice-making.



In This Episode We Discussed:

  • The attention economy.

  • Perfectly timed likes and dopamine hits.

  • How technology persuades our social psychology.

  • Billions of users at risk of fractured sensemaking.

  • News feeds with an agenda.

  • Conscious human choices or reactions to anxiety?

  • Steering two billion people’s thoughts.

  • Hyper-normal stimuli.

  • How outrage and conspiracy are better at capturing attention.

  • What does it mean to have a life well lived?

  • The vision of the Center for Humane Technology.

  • Training your attention like a muscle.

  • Passive and active cognition.

  • Manipulation using privileged information.

  • The externalities of maximizing time on screen. 

  • Replacing the platform with a decentralized protocol.

  • How evolutionary instincts are absent from social software spaces.

  • Addiction is super profitable.

  • Power dynamics, sovereignty, and technology.

Show Notes:

0:00     Intro

6:25     An arms race to the bottom of the brain stem.

8:24     The financial incentive behind hooking a user's attention.

12:22   The influence of smartphones on our moment to moment thoughts.

17:42   Sharing a world yet living in completely different reality bubbles.

19:11   How do we make good choices when our sensemaking is damaged?

25:43   Living inside of a menu of available choices, and what it is missing.

27:51   Hijacking the human impulse control center.

34:13   “I'm just going to watch this one video, then I'll get back to work.”

38:00   Social media is pressing on specific vulnerabilities for engagement. 

41:05   Research shows preferential steering toward radicalized and divisive content.

46:27   The neuroplasticity that allowed Homo sapiens to adapt also increases our susceptibility to maladaptively encoded environments.

51:14   What do we need from our tech to support life outside of the screen?

52:16   Inquiry into the nature of conspiracy and oversimplified certainties.

59:31   Practical tips on how to assess and develop intelligence. 

1:03:43 The overlooked result of clicking the ‘AGREE’ button.

1:16:20 Network dynamics and the convenience of our primary communication channel.

1:19:17 Could blockchain technology be part of the solution?

1:21:34 Goal-driven as distinct from values-driven design.

1:25:24 Why are we so susceptible to hyper-normal stimuli?

1:32:01 The mission of technology should be to repair the social fabric.

1:35:55 The ethical challenge within persuasion.

Mentioned in This Episode:

TristanHarris.com

Center for Humane Technology

Tristan Harris- TED Talk- How a Handful of Tech Companies Control Billions of Minds Every Day

Time Well Spent- YouTube 

Joe Edelman- The ideas we needed yesterday, today.

Joe Edelman- on Human Systems

Jordan Greenhall- Medium 

Jordan Greenhall- YouTube

Jordan Greenhall- The War on Sensemaking (article) 

Jordan Greenhall- The War on Sensemaking (video)

CivilizationEmerging.com- New Economics part III

Donate to Center for Humane Technology 

About Tristan Harris:

As a former Google design ethicist, Tristan is exclusively focused on how technology manipulates and steers human thinking and action, and on how the digital arms race to capture attention threatens the pillars of society: mental wellbeing, the healthy development of children, social community, and democracy (by the natural design of technology platforms rewarding propaganda, fake news, conspiracies, and polarization without discrimination).

Complete Episode Transcript:

Daniel:
Hello and welcome to the Collective Insights podcast. I am very excited to have Tristan Harris with us today. Tristan is a very good friend and is doing extraordinarily important and unique work in the world.


[00:01:00]





[00:01:30]
He's really been leading the conversation in Silicon Valley, and now really around the world, in terms of considering ethics in the design of technology, very specifically in the design of media technology and software technology. He's also bringing to the forefront of the awareness of the technology community, and really the world at large, including government and policy, et cetera, what some of the risks of media tech in particular are: that when we think about risks of exponential technology, it's not just gene tech, biowarfare, AI, military application type things, but the technologies we're already interacting with that have the ability to affect the information ecology and gather data about, and send data to, tremendous numbers of people with a tremendous degree of sophistication. And because of Cambridge Analytica, and Russia, and, you know, et cetera with Facebook, this conversation is publicly much more well known today than it was even a year ago.

[00:02:00]
And Tristan has been one of the voices really helping to bring some of the key insights to bear here, but has been working on this for some time. He was the product philosopher at Google. He was working on ethical design at Google and left there to start a project, and really a movement, called Time Well Spent, that said: rather than optimize people's time on site, which might be terrible for their life, how do we optimize for people actually having a good life, which means that their time is being well spent?

[00:02:30]
Time Well Spent has evolved and become a big movement and been taken on as a thing to do by Zuckerberg and many organizations, and Tristan has founded an organization now called the Center for Humane Technology. So how do we build technology cognizant of all of its effects, the externalities, internalize them, and make sure that we're building the world that we want to build?


[00:03:00]
So, Tristan is doing super important work, also a beautiful human, good friend. Tristan, thanks for being here today.

Tristan Harris:
It's so good to be here with you Daniel.

Daniel:



[00:03:30]
So I expect many of the people listening will have already come across your work, watched the TED Talk, seen 60 Minutes, something like that. If that's the case, just sit tight for these first few minutes where I ask Tristan to kind of share the foundational teachings again for people who are not already familiar with the story, and then we're going to get into a bunch of topics that are really at the core of some of the insights and motivations Tristan has been working with, and conversations that he and I have been in for a couple years now, but that are things that even if you followed his work closely you will have more to learn about in this podcast: involving hypernormal stimuli, involving exponential tech across many categories, regarding trust and fiduciary agreements and information ecology. So a lot of meaningful things to be tuned into.

[00:04:00]
So Tristan, to get started, for people who haven't been thinking about this, Facebook seems like a pretty benign thing. Hop on there, you get to see your friends, do basic social media stuff. Why is that not the whole story?

Tristan Harris:

[00:04:30]
Well, there's so many reasons why that's not the whole story. So where do we begin? Well, at the top of this, I think it's important to understand, what are the goals of the people who make the technology that we use, and are their goals aligned with our goals?





[00:05:00]
I actually started into this conversation by first studying persuasion or magic actually. When I was a kid, I was a magician starting at six years old and it teaches you to see people's choices, quote, unquote, in big air quotes, very, very differently because instead of seeing what people do, and think, and choose as being a product of some kind of sovereign process, it teaches you that there are ways of influencing people's attention, their choices by structuring a menu in a certain way, by using emphasis on certain keywords, by changing the representations people use, by activating certain cues. You can really change how people, quote, unquote, choose and navigate reality.



[00:05:30]
Then when I was later in college at Stanford, I studied at the Persuasive Technology Lab that basically taught a lot of young engineering students the principles of persuasive psychology. So you learn Edward Bernays, you learn clicker training for dogs, you learn how casinos manipulate and shape the choice-making environment that gets people to play slot machines, and then marketing and pickup artistry, and there's just this infinite domain encyclopedia of stuff that influences the evolutionary instincts of the human animal, the human social animal.



[00:06:00]
And then my friends in that class were the founders of Instagram, and so what most people don't realize is that there was a sort of body of work that the tech industry pulls upon to figure out: how can we keep people engaged with their products? And so that's where the conversation with Facebook enters.


Instead of seeing it as a neutral tool: the narrative that is so common, or at least had been common until about a year ago (we've been changing it this last year), is that technology's just a tool. It's just a hammer, and it's up to us to choose how we use it. Facebook is also just a tool, it's just a hammer, and it's up to us to choose how we use it.



[00:06:30]
And the premise is that's not true at all. That behind the screen there's a hundred engineers who know exactly how your psychology works, who know exactly when to trigger the dopamine release of that perfectly timed dose of 15 likes from those 20 friends, or those 15 friends, that actually matter to you, and who know what exact schedule they should dose you with to keep you sort of hooked on the screen. And so the first important part of the conversation is the way that these products keep people hooked and engaged, or addicted, because that sort of sets up the matrix.
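What Tristan describes here is a variable-reward schedule, the same reinforcement pattern a slot machine uses: the likes are real, but their timing is engineered. A minimal sketch of the mechanic (hypothetical Python, not any platform's actual code):

```python
import random

class LikeDoser:
    """Withhold incoming likes and release them in unpredictable
    batches: a variable-reward schedule, the same reinforcement
    pattern a slot machine uses."""

    def __init__(self, min_batch=5, max_batch=20):
        self.pending = []  # likes withheld from the user so far
        self.min_batch, self.max_batch = min_batch, max_batch
        self._reset_threshold()

    def _reset_threshold(self):
        # A randomized payout point makes the reward unpredictable,
        # which is far more habit-forming than a fixed schedule.
        self.threshold = random.randint(self.min_batch, self.max_batch)

    def record_like(self, friend):
        self.pending.append(friend)

    def maybe_notify(self, user_is_dormant=False):
        # Pay out either when enough likes have pooled, or when the
        # user has gone quiet and needs a reason to come back.
        if self.pending and (len(self.pending) >= self.threshold or user_is_dormant):
            batch, self.pending = self.pending, []
            self._reset_threshold()
            return f"{len(batch)} friends liked your photo!"
        return None
```

The whole trick is the randomized threshold: an unpredictable payout moment is what makes the checking behavior compulsive.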



[00:07:00]
If you imagine a jack in the back of your head, that's what sets up the jack so it's nice, firm, and sturdy, and then the question on top of that becomes: how can you actually start to steer and influence entire populations' thoughts and beliefs, conspiracy theories, anti-vax, Russia disinformation campaigns?






[00:07:30]
All of that gets set up once you first have this first layer of addiction established. I started getting concerned about this back when I joined Google (they acquired our company) and realized that more and more of my friends in the tech industry were actually focused on how do we basically manipulate people's psychology to keep them engaged and on screens, and not really asking how do we benefit people's lives? We can get into all of that I'm sure.


One last thing on that, which relates to some of the resource dynamics that you're so familiar with, is I think for your audience to see that there is this finite resource that we are all drawing upon and we never actually thought to think about conserving or protecting before and that's attention.



[00:08:00]
That attention is a finite environment that is both an environment we can pull from and one we put back into for others, and there's only so much. Because the business models of Facebook and Google and YouTube and Twitter and Snapchat are all basically to capture human attention, it turns into this arms race where, as they start to butt up against each other, they have to get more and more aggressive, what's famously called the race to the bottom of the brainstem, to hijack human attention. So maybe that sets the stage and we can go anywhere you want.

Daniel:


[00:08:30]
So it's probably obvious to many people but to just construct it all the way in case people haven't thought through the business model and the incentive dynamics, why would a company like Google or YouTube or Facebook want to capture people's attention? Why would they want them to be addicted or hooked? Why would they want to maximize their time on site? What's the incentive or advantage of that?

Tristan Harris:
Well, I mean, Facebook has a stock price and a market value that's something north of five hundred billion dollars and the question is, what is that tied to? What resource is drawn upon to actually pump up that stock price? How do they make their money? So how much have you paid for your Facebook account recently? So, not really anything?


[00:09:00]
So then who's paying for them? Well, the advertiser, which means that our attention is the product that they sell to the advertiser, which means that their basic motivation is to keep people hooked like a drug dealer and say, "How do I keep people engaged every single day?" They still to this day, even after adopting Time Well Spent, which we'll talk about later, they still measure daily active users as their number one metric, just as a drug dealer might measure how frequently they're able to get people hooked.


[00:09:30]
And so the business model is, I make more money the more time you spend, or the more I know about you because the better I can predict what ads will be matched to you. I also make more money the more users there are because I can sell that audience, the future of that audience, and the growth of that audience to an increasing supply of advertisers.






[00:10:00]
I make more money the more advertisers there are, and the more ad campaigns there are, whether that's Russia or that's some good actor just trying to sell a pair of tennis shoes. And so their incentives are basically to close their eyes and make this automated system steer two billion people's thoughts, and let advertisers pay to access any audience they want without double-checking who's doing what. They just want activity because activity generates money.

Daniel:
Now I want people to think about, for a point of reference: Tristan says two billion people. Facebook, Google, Amazon, right, the largest digital interface companies, interact with basically the entire online world, and that two billion is scaling very quickly as the whole world is getting online.


[00:10:30]
To just think about population curves for a minute: when we think about the Crusades during the Dark Ages, and the propaganda of the Crusades taking over the developed world as we knew it at the time, there were only half a billion people in the entire world, right? And for the whole history of civilization the world was capped at half a billion people, until the Industrial Revolution, which was not that long ago.


So we're talking about four times what was, for most of history, the entire population of the planet, influenced by one company on a mostly daily basis, and it's just a meaningful perspective to keep in mind.

[00:11:00]
Tristan Harris:

One other one we tend to add is Facebook actually has more than 2.2 billion users I think, which is about the number of notional followers of Christianity. YouTube has 1.8 billion users which is about the number of notional followers of Islam.


So if you imagine just the surface area of influence on people's thoughts, it's unprecedented.

Daniel:
[00:11:30]
Okay, so when Islam or Christianity, or the Republican platform or the Democratic platform, or some ideology like that, tried through broadcast media to influence the minds of people in the past, they shared a message, through a commercial or through whatever it is, that lands the same on everybody, right? So tell me why it is that interacting through a platform like Facebook has more power per person than those did, at the same number of people?

Tristan Harris:

[00:12:00]
Yeah, this is so incredibly important, because the number one objection to this entire argument space that we're walking into is: this is nothing new. We've always worried about new media; we worried about newspapers, that when people started reading them on the subway they weren't going to talk to each other; we always worried about propaganda; we always had this before; therefore, nothing new, just go back to business as usual. So it's really important to understand what's different.


So the first is less to do with Facebook and more to do with the fact that it's on a smart phone. The smart phone form factor means that we're living inside of two billion Truman Shows. Two billion perfectly curated attentional channels that are perfectly curated to whatever interests us.

[00:12:30]
Our apps are our friends, et cetera and from the moment we wake up in the morning and we undo our alarm, to the hundred and fifty times a day that we check our phone through the day as a millennial, to the time we set our alarm when we go to bed at night, or don't set it and just keep playing with our phone until we fall asleep with the phone in our hand like so many people. We are truly, intimately jacked in. This thing is influencing our moment to moment thoughts.



[00:13:00]
Even when you're not looking at the screen, many of the things that you're thinking about now, are very much dictated by the things that you had seen five minutes ago when you were on the screen, whether it's your email or a text message, or something else.


So the first thing is the intimacy, the frequency and the kind of how much we're intimately interwoven with this fabric. We kind of inhabit this environment. This is the first thing.


The second thing is that it actually can persuade our social psychology. Television did not construct our social reality, did not tell us what our friends were doing and how our friends ... where our friends were, what our friends found valuable, whether we were validated.

[00:13:30]
It did it through abstract advertising using abstract people, but it's never before been true in history that when I wake up in the morning and I turn this screen over, I can see photo after photo after photo of evidence that my friends' lives are better than my life. I can see photo after photo after photo of my friends having fun without me. That's a new experience for two billion human animals, especially for the teenage audience. So the second aspect is the sort of social construction of reality and social persuasion.


[00:14:00]
The third one is AI, and we'll get into this later, but these systems are actually automated and optimized with the most powerful super computers in the world. Every time you open up Facebook or YouTube, you've just activated a super computer pointed at your brain that's trying to figure out what move can I play to play chess against your mind and to keep you hooked. It's a totally new environment.


Then of course the fourth related to that is that it's personalized, which I sort of said at the beginning. It's a Truman Show. It's perfectly curated to your specific interests. And those four things are different and unique from any other time, radio, television, the Crusades or things like that.

[00:14:30]
Daniel:

Okay, so I want to double click on some of these because they are so important and as they get unpacked they become even more clear. So the first point, form factor, I think everybody gets that. No one had that level of intimacy and continuous reference and push notifications with their newspaper or with a television, or et cetera.



[00:15:00]
With regard to the personalization and the AI part, most people have only seen their own Facebook channel. They have a sense that that's what Facebook looks like, that's the environment, and everybody else's Facebook channel is fairly similar.


It's a very sobering experience when someone looks at the Facebook channel of someone with a very different friend group and political ideology and maybe a different aesthetic, and realizes that it's a completely different universe that has almost nothing in common. Not the same advertisers, not the same people, not the same ... there were so many people, I remember, who thought it was so obvious that everyone in the world supported Standing Rock and it would go through, and anyone who didn't support Standing Rock was a Nazi.

[00:15:30]
Then they saw someone who lived in the Midwest or actually in Dakota or whatever, their Facebook, and it looked like the exact opposite, that the people at Standing Rock were all terrorists and they're like, "What the fuck? I actually thought that I knew the universe I lived in." Right?

Tristan Harris:
Yep.

Daniel:
Just like before the Trump election everyone following the Democratic platform, like Hillary winning was such a guaranteed obvious thing. It wasn't even reasonable to think anything else.

Tristan Harris:
Same thing with Brexit. Yeah, exactly.

Daniel:
And so-

[00:16:00]
Tristan Harris:

One extra thing I want to add to what you're saying which is, you and I could have the same 500 friends, the exact same set of friends and if we opened up Facebook today, we would see two, not just slightly different, but two completely different newsfeeds because based on our previous click history, and the things that we both are interested in, which are very different, we'd actually see completely different newsfeeds.



[00:16:30]
The reason that these sort of filter bubble black hole things exist, and we'll get to it I guess a little bit later, is because of this dynamic of the attention economy: Facebook does better in the attention economy, compared to YouTube or Twitter, if they personalize a feed to show you that everyone at Standing Rock supports the thing that you care about.


Versus if they showed you a more complex view of reality; actually, reality is way more complicated, people disagree with you. That would not be as compelling. And so Facebook is driven by this win-lose game that they make around attention to personalize newsfeeds. It has to do that, otherwise it won't win in the attention economy.

[00:17:00]
Daniel:

So what does this do to the information ecology and people's ability to make sense of the world?

Tristan Harris:
So this is what's so dangerous, and I learned a lot of this speaking with you, Daniel, but it's that sense-making becomes totally fractured. I think one of the most dangerous parts of this is not that people are addicted, that they lost some time, that they're not spending their time the way they want to.



[00:17:30]
The real critique and the danger is that our sense making apparatus is completely fractured into a thousand, or millions of pieces, or billions of pieces where we are living in completely different universes. We believe that we are living in the same universe, and our norms and models of what reality is and how other people believe things, and what facts there are and what facts are not true is completely divided, and we can no longer bridge the common reality anymore. To me that's the most important aspect of this whole system, I think.

Daniel:
[00:18:00]
So I think most people listening will recognize that they have a pretty clear certainty about some things, where they know other people that have a very strong certainty about the exact opposite things and recognize that that at scale across so many issues represents a type of fragmentation and polarization.


We say, "How the fuck does the world get through it?" And that it's more multipolar, more certain, across more axes and then I think most people also recognize that if you think about what are the most important things facing the world that you could know about?

[00:18:30]
Like, how realistic is it that AI kills everything in the near-term future? How likely is it that we're really going to be able to make it on Mars? Where is climate change really at? Is it anthropogenic or not, and on what time scale? How long do the coral have before they all die off? What's happening with that continent of plastic in the middle of the ocean? Is Fukushima really releasing radiation like crazy or not?


[00:19:00]
You'll realize, like whoa, those questions are more important than every question I ask and I actually have no idea what the fuck is the case because there's ideas that are presented as pretty certain that are in direct opposition with each other if I pay attention to more than just my feed.






[00:19:30]
So then there's like a, there's just no such thing as truth or fact, but that's of course not true. It's just there's no such thing as my ability to actually find truth or fact. So then, how do we make choices? So if we have a world where technology's extending the potency of our choices, the impact, the leverage of them, while also damaging the sense making to inform the choice, what happens if sense making is going down and choice making is going up simultaneously?


Okay, so coming back to something you said-

Tristan Harris:
I'm glad we're looking at such a pleasant, present reality that everything's going to be fine.

Daniel:
I think it's clear for everyone that we have to actually be able to make sense of the world to make good choices. To the degree that there are places I'd go for sense making that have an agenda other than telling me what is most true, there's a problem there.

[00:20:00]
So now when you said Facebook is personalized in a way that TV wasn't, how does it personalize to me? I have never taken a psychological profile on Facebook. I have never written a psychological eval and told stories about my childhood. How does it know how to personalize to me?


Because again, it might seem like all of one's engagement on Facebook is pretty benign.

Tristan Harris:
[00:20:30]
Yeah, well you know, just take your significant other and watch the kind of things that they click on. If your significant other clicks on a lot of cute animals, I have a friend named Max who actually just clicks on lots of cute animals and his feed is totally filled with just cute animals and so without even realizing it, that's what most of his consumption ends up reinforcing.



[00:21:00]
I end up looking at sort of save-the-world type things and concern about Russia and concern about disinformation, so my feed is basically a repeating view that the world's falling apart and you should feel helpless about it.


And neither of these things are good. And I think about the question that Facebook faces now: you know, imagine you're inside the newsfeed team and your job is to fix all this, not just for Daniel and Tristan from California, but to do this in Myanmar, where there's actually a genocide happening because of the amplification of certain fake news there, and also in Sri Lanka.


[00:21:30]
These issues are incredibly real and affect, not just people's time or addiction, but actually affect, when that sense making breaks down, entire populations' cultural sort of tensions that can lead to people dying.


And so now, how do you run your newsfeed when you're also enacting exponential consequences, with a team of a few hundred people in a system that you're trying to understand and control, but you're also impacting societies and languages that your engineers don't even speak? How do you do that?

Daniel:
[00:22:00]
Okay, so let's say that I'm going to play devil's advocate and be a representative of Facebook for a minute and I say, "Hey, if you're searching cute animals regularly, and I know that's what you want and there's too much stuff on the internet to be able to find it, we're going to make our algorithms find it and send it to you because we're just trying to serve what it seems that you want. If you want to know shit about Russia, we're going to send that to you. That's just called us helping you do what it seems like you want to do better. What else should we do?"

Tristan Harris:

[00:22:30]
Right. This is so incredibly important because essentially this is the root of the whole thing. It's what is your model of human nature? Is what you observe a human animal doing a reflection of their conscious choices? If we check our phone a hundred and fifty times a day, does that mean that those were a hundred and fifty reflective, conscious, mindful choices? Or are those just a hundred and fifty reactions to anxiety?




[00:23:00]
In the case of Facebook, their model of human choice was: if I ask you, "Daniel, what do you want more of in your life, what do you want to be doing?" and you say, "I want to go to the gym," but then every single time you say that, I just throw a box of donuts in front of you, because I have one, and I just see what happens.


And if every time you go for the donuts instead of going to the gym, my assumption is, when you told me you were going to the gym, that was just a lie, because your revealed preference is what you really wanted: you wanted the donuts. And this is literally the philosophical model that I know for a fact was governing Facebook for the last decade.



[00:23:30]
"We try to give people the nutritious content or something like that, but every single time they go for the cat videos. And so, what are we supposed to do? Make people read the New York Times?" And what this really comes down to is: what are the factors, like a magician, that you could spot, that influence the choices that people make?


I mean when I said I want to go to the gym and I got a phone call from my friend saying, "Hey, I'm with my friend Susie right now, we're about to go to the gym. Hey, we're going to the one that's like right down the street from you, right now, do you want to come?"




[00:24:00]
I mean if I said I want to go to the gym and that's the first thing that happened, that's the choice that I would make, but because that's not the easiest, sweetest choice on life's menu, and the sweetest choice on life's menu at any given moment with a smart phone is let me run away from my anxiety. Let me see photos of my friends doing stuff that makes me feel bad. Let me find some more slot machine email and see what feels good. I'm going to go for that.





[00:24:30]
And so we need to really, dramatically upgrade our model of what it means for a person to choose, if there is such a thing. And to the extent there was a reflective process that happened hundreds of years ago, it's not so much that there was this protected, perfect, sacrosanct thing called choice, but there certainly was a different phenomenological process happening inside of a human animal before we had digital technology, and it's working very, very differently now when you have something in your pocket that offers slot machine rewards.

Daniel:
Okay, so I want to argue something that someone at Facebook might say again, and see where it goes. So, okay, "Well Tristan, it sounds like you're saying that we should be everyone's parents and say don't eat the donuts go to the gym, do what's healthy for you and that we're responsible for you, and you're not responsible for yourself."

[00:25:00]




[00:25:30]
That actually seems paternalistic and pejorative, and not like what capitalism, supply and demand, would have a company realistically do. People are demanding something, we create the supply. If we don't, somebody else will. That's actually just empowering what it is that people seem to actually be wanting to do, and it's their personal responsibility to decide what it is they want to do, and we're offering them increased ease and efficiency to do the things that they seem to be wanting to do. So are you basically saying that we should be like Big Brother and control what they think they should do?

Tristan Harris:
No, it's really a different, more Buddhist insight of everything has a choice architecture. You're always living inside of a menu of choices. Right now we're talking about the menu of choices that a smart phone provides, but when you wake up and your eyes open in the morning for the first time, you're also presented with a menu, which is the set of things that your mind shows you or tells you are available to you to do next when you wake up in the morning.

[00:26:00]
But what this model of choice is really missing is: what are the search costs? How far away are the choices that I would need to discover? What awareness do I have of choices beyond the ones I can see? Like, Facebook's presenting a certain menu of choices of some newsfeed content, but it's not presenting choices like, "Hey, your friend Daniel's hanging out next door, do you want to go hang out with him?"

[00:26:30]
They have to have a very different model of attention, really, as the governing spotlight and tool set through which we're making all of these choices. And when our attention is on a screen and our esophagus is clamped down and we're staring at a phone like this, then we're not breathing very much. That whole phenomenology, that whole pattern, does not give us the kind of free look-up-at-the-sky-in-awe moment to see the wide set of choices available to me next.


[00:27:00]
And so I think of it as like we just kind of collapsed the space of human choice making down into this very small form factor and medium of, here's a bunch of choices that you can make that will keep you on the screen next.


So I think we need a different model of human choice, and it's not simply, in terms of this normative aspect of it, being Big Brother; it's more that we already are Big Brother. Every system that's deciding a menu by which two billion other people live has to ask: what am I putting on the menu? What am I not putting on the menu? And how is that changing or shaping the outcomes of the choices that people make?

[00:27:30]
Daniel:

Okay, so you just said a really key thing, which is: it's not that we're saying that we should start shaping human behavior; we're saying we are inexorably shaping human behavior no matter what we do by creating environments, be they physical or digital environments, because humans respond to their environment.



[00:28:00]
Then it's just taking responsibility for what are the actual statistical causal dynamics that are happening by the environments we create? So we're not saying humans are not responsible for themselves, but we are saying they aren't exclusively responsible for themselves.


There are environments outside of what our evolutionary capacity equipped us to handle, which is why we don't like drug dealers dealing cocaine to our five-year-olds. We could say, well, the five-year-old should be fully sovereign and responsible to leave the cocaine alone and be able to use cocaine in a responsible way, and it's like, well, not really. It's going to fucking hijack their impulse control capacity. But we'll hand a smartphone to a five-year-old.

[00:28:30]
We watch the endogenous dopamine hits that occur from the flashing lights and the blue light and the likes and whatever it is, and we see the same type of dynamics with cocaine. So then we're like, okay, is it really just a benign entertainment device so we don't have to parent?




[00:29:00]
And not just with children but as it extends, because we really don't like the dealers dealing the cocaine to people anywhere, especially in a way that, like, we ... and that doesn't mean that the answer is to make it illegal, but it does mean that the answer involves some deeper considerations regarding the effects.

Tristan Harris:
And I think this is going to get to something that I'm sure we're going to talk about more as well, which is the asymmetry between the people who are designing the product, their power and how much they know about you and what would exploit you or cause something to happen in you, and the power and awareness that you, the subject, have. So I think that this is a persuasive transaction.



[00:29:30]
And we haven't actually talked about some of the other persuasive design techniques that exist. Some of them feel more innocuous, like bottomless bowls, things like autoplay, you know, where your mind depends on the stopping cue. As a magician you're always looking for stopping cues. People are doing an activity and then let's say you're drinking a glass of wine and at some point the glass of wine hits the bottom, and when the bottom hits your mind wakes up and has to ask a question, "Do I really want more?"


But let's say I'm able to refill that glass of wine with just autofilling alcohol, filling to infinity, so you never stop, that will change the dynamics of how often you wake up and ask, "Do I want more?" Especially as that has an intoxicating effect as you're drinking it.

[00:30:00]
So YouTube can do the same thing, they can rip off the bottom of the bowl and autoplay the next video, which is responsible by the way for more than fifty percent of views on YouTube.
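The design change behind the bottomless bowl is tiny: continuing becomes the default, and stopping becomes the thing that takes effort. A toy sketch of the difference (hypothetical Python, not YouTube's or any platform's real implementation):

```python
import itertools

def paginated_feed(fetch_page):
    """Old pattern: each page ends, and the user must actively ask
    for more. Hitting the bottom is the stopping cue."""
    for page in itertools.count():
        yield from fetch_page(page)
        if input("Load more? [y/N] ").lower() != "y":
            return  # a real decision point: "do I want more?"

def bottomless_feed(fetch_page):
    """Autoplay / infinite scroll: the next page loads by default,
    so the 'do I want more?' moment never arrives."""
    for page in itertools.count():
        yield from fetch_page(page)  # no decision point at all
```

Nothing about the content changes between the two; only the default does.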


Facebook and Instagram can make feeds that infinitely scroll, rip off the bottom of the feed, make people fall asleep and into a trance as they keep scrolling. These are the more innocuous ones. But the asymmetry happens when I, as a persuader behind the screen, know a lot more about how your psychology, or the teenager's psychology, works. I mentioned the Snapchat example. I'll use a different one this time.

[00:30:30]
Let's say you Daniel, go dormant on Facebook. You stop using the product for like a week because you've got more important stuff to do, but Facebook sees you and they say, "Hey, I kind of want Daniel to be reactivated." In fact, there's actually an entire field of growth hacking called come back emails or come back notifications.





[00:31:00]
Come-backs are basically: what do you send this person to make them come back? And so what do I do? Well, maybe I can show your friend Jordan, who's sitting there scrolling through Facebook and looking at stuff, photos where you're in the photo, and then Jordan clicks on the photo, and Facebook recommends, "Hey, we noticed Daniel's face is in this photo, do you want to tag him in this photo, yes or no?" You don't even have to type his name. You just hit yes. It's a big blue button. Just hit yes. And when you hit yes, then Daniel, that dormant user, gets that email saying Jordan tagged you in a photo.




[00:31:30]
It's as if Jordan made his own independent, sovereign choice to tag Daniel in this photo, and when you, Daniel, see that, you're thinking, "Oh man, Jordan wants me to see this thing, I'd better interrupt what I'm doing, let that project move aside, and I'm going to go back and see what this photo is."


Of course all of this is orchestrated by the puppet master upstream, and this is happening, not explicitly. The example I gave is not an explicit thing that Facebook does, but these kinds of things are happening across LinkedIn, Snapchat, Facebook all the time, and this is the asymmetry.


The people on the other side of the screen know a lot more about how the psychology of the people that they are influencing works. They also have AIs at their side that help them predict which color buttons and which friends would help reactivate you as a dormant user.

Daniel:
So how does-

Tristan Harris:
It's really not a fair fight.

[00:32:00]
Daniel:

Talk to me about how AI works. How does it know if I'm going to respond more to a green or a yellow button or a word like this or a word like that?

Tristan Harris:
Well, it A/B tests on people just like you. So A/B testing is: I'll send audience group A a bunch of buttons that look blue, and for the audience whose behavior looks a lot like yours, it'll see whether that blue button works better than the green button, and if it does, I'm going to start sending Daniel the blue button.

[00:32:30]
And every political campaign, people don't realize this, but I think the Trump campaign said that they had tested 66,000 variations of every single ad by the time it actually reached the audience.


And so with this kind of A/B testing, we're kind of hill-climbing our way, or race-to-the-bottoming our way, to the bottom of the brain stem, to figure out: these are the kind of word choices that activate your amygdala, your outrage. These are the colors that light up your brain. These are the words that most make you polarize, most make you hate the minority population. You can really activate people in a perfect way now.
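For the mechanically curious, the core of an A/B test is only a few lines: hash each user into a bucket, show each bucket a different variant, and ship whichever variant gets more engagement. A minimal self-contained sketch (illustrative only; production systems run thousands of concurrent experiments with real statistics):

```python
import hashlib
from collections import defaultdict

VARIANTS = ("blue_button", "green_button")

def assign_variant(user_id: str) -> str:
    # Hash the user id so each user always lands in the same bucket.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return VARIANTS[h % len(VARIANTS)]

exposures = defaultdict(int)  # how many users saw each variant
clicks = defaultdict(int)     # how many of them clicked

def record(user_id: str, clicked: bool) -> None:
    v = assign_variant(user_id)
    exposures[v] += 1
    clicks[v] += clicked

def current_winner() -> str:
    # Ship whichever variant has the higher click-through rate.
    return max(VARIANTS, key=lambda v: clicks[v] / max(exposures[v], 1))
```

Run that loop against millions of users and tens of thousands of ad variations, like the 66,000 Tristan mentions, and it becomes an automated search over human responses.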

[00:33:00]
Daniel:

So you mention that there's an AI playing chess with our brain and that there's an asymmetry. Now, when most people get on Facebook, they don't think that they're actually in a win-lose game, that there's information warfare going on, and that there is someone competing for their attention.


So that means that they're in a game that they have not even been aware of or consented to and even if they had, they wouldn't do well.

[00:33:30]
So talk a little bit more about, like, where is AI at with its ability to play chess against humans, and how does that level of AI compare to what Facebook is running, et cetera?

Tristan Harris:



[00:34:00]
Yeah, this is really important. I think it's particularly important for the AI community, who always talk about, you know, these hypothetical future scenarios where the AI runs away pursuing a goal and what if we can't control it? What if I were to tell you that's basically what we have right now? The Facebook newsfeed is a runaway AI, and YouTube recommendations are a runaway AI, pursuing a simple goal of whatever keeps you on the site longest. And you can't even control the thing, because now it's steering two billion people's thoughts.



Daniel:
So let's go into what you just said about playing chess.

Tristan Harris:



[00:34:30]
You know how when you're doing something and some friend of yours sends you a link, and the link is a YouTube video and you click on it thinking, "Okay, I'm about to interrupt my work but I know those other times I watched a bunch more videos on YouTube but this time, this time's gonna be different. I'm just gonna watch this one video and then I'm going to get back to work." And then somehow you wake up after two hours and you're like, "What the fuck just happened?" And it's because YouTube was playing chess against your mind.






[00:35:00]






[00:35:30]
As soon as you land on YouTube, you activate a supercomputer that figures out, based on everything that's ever worked on you in the past, every other video that's gotten you to watch, what's the video I can show next in that right-hand sidebar, or make autoplay after this video is over? That's the perfect video that you will find irresistible, that everything in your body ... I'm not saying it's a cat video; I'm talking about, for you and I, it's like the perfect Buckminster Fuller video that's like, "Oh my God, I've never seen that one," or whatever it is. YouTube is basically succeeding at playing chess against us when we find ourselves falling into that trap. And we know what happens when human beings play chess against computers: we lose. When Garry Kasparov lost playing chess, he lost for all time. There's never a moment from that moment onward where human beings are simply better at playing chess than computers. From that point forward, the computer can see more moves ahead on the chess board, for all of human history, for all time ...

Daniel:
So I just want to-

Tristan Harris:
... and this is what's so dangerous ... oh go ahead.

Daniel:



[00:36:00]
I just want to emphasize this particular point. Most people who are not already chess masters aren't aware of how much better at chess Garry Kasparov is than they are. There's a power-law distribution in chess, and he's orders of magnitude better than they are. And the AI that beat him, beat him at an even further gradient now than that. And we get to a point where the AIs actually ... it doesn't make sense for them to get better than they are at chess because we are so incomparable-

Tristan Harris:
Right.

Daniel:

[00:36:30]
... at this point. And then, more complex things, like Go, and things that have to do with information, like Jeopardy, et cetera. So then we realize that those AIs are no longer rare things that only happen in those environments; those AIs are of a similar order of capacity to what optimizes Facebook newsfeed algorithms. And so if someone starts to get, "Okay, so there actually is a chess game against my brain, but instead of just being for the idea of a win, it's for a $500 billion valuation, so it has that much motive behind it. And it has this much data science behind it and this much time to do what it does and"-

Tristan Harris:
[00:37:00]
And you have to worry about what you spend on it. Every moment you spend on it, you're feeding it with resources which get reinvested into more computing capacity, so it can predict even more moves ahead on the chess board against your mind, so it can win even more the next time than it did last time.

Daniel:
And I'm nowhere near as good as Kasparov and I don't even know I'm playing.

Tristan Harris:
Yep.

Daniel:


[00:37:30]
And so then we say, "Oh, fuck," right? Okay. So it seems like human choice is actually a very delicate thing and to the extent that we are going to be able to engage it well at all, we have to be very protective of the things that can otherwise hijack it. So, talk to me more about the topic of a hypernormal stimuli 'cause you said that YouTube or Facebook is gonna know what to put that will keep me more than other things. And, interestingly, we can say it's gonna be different for you and I than for someone who's got the cat videos but realistically, there's some more basil shit for everybody that's going to respond. So talk about that.

Tristan Harris:
[00:38:00]
Yeah, so there's ... this is important when we also talk about personalization. There's different persuasive techniques that will work on different people. We have different vulnerabilities. For example, let's just take teenagers, 'cause it's a little bit less personal. Different teenagers are differentially vulnerable to different things, so some people are more vulnerable to fear of missing out. When they see that their friends are doing something without them, that hurts or pulls on them to a stronger degree than other people who are just tuned differently, where it just doesn't actually matter to them.

[00:38:30]
A different vulnerability is social evaluation. Some people are really sensitive to how often and how much they're socially validated, especially if you're in a developmentally sensitive period of being a teenager where we really don't know our own ... we don't have security in our identity and our own self-validation yet, and we get it from our peers but now Facebook and Instagram and Snapchat are controlling how frequently they're dosing out those 15 likes into my social validation.

[00:39:00]





[00:39:30]
And, to your point about why this is called a hypernormal stimulus: if you think back to the tribal dynamics of hundreds of thousands of years ago or something like that, how often did you get social validation? What is the frequency and the way in which you would experience social validation? It's important for us to feel or desire or need that; it's important for our in-group and out-group, to feel belonging and to feel part of a tribe and community, and all those normal dynamics are really important. But now we have this exponential form of ... or really not exponential, but a kind of exaggerated form of social validation that's occurring at a frequency, dosing, amplitude, and variability that we've never seen before.





[00:40:00]
So the slot machine isn't filled with colorful lights. The slot machine that you're pulling every 10 minutes is filled with your friends validating you. And that is a totally new situation that is highly addictive because when you get that much social validation from the screen and you don't get it from the real world, suddenly the real world social validation isn't nearly enough and you're, sort of, edged up onto this higher plateau of needing the kind of social validation that we get from the screen.

Daniel:


[00:40:30]
Now, how do likes on Facebook relate to ... and obviously I'm setting up a question based on conversations that we've had, because I want people to get this whole narrative. How do likes on Facebook that give social validation relate to, say, people eating too much sugar, or porn moving towards extreme XXX that damages relationships? Like, what's the narrative arc across that?

Tristan Harris:



[00:41:00]
Yeah, well, I think you could actually describe these things in a probably better way than I could. Porn is a hypernormal stimulus of sexual opportunity ... I think what I'd want to say about this is, one of the interesting dynamics with the attention economy is that in a world where everything's already porn, the way to get more attention is to get even more radical, right? In a world where there's already lots of political outrage, the way to get more attention is to move further down the radical [inaudible 00:41:08].





[00:41:30]
So this has actually been shown by a former YouTube engineer who's joined the Center for Humane Technology, Guillaume Chaslot, who's done amazing research on how YouTube has a preferential steering toward radicalizing, divisive, and also more extreme, conspiracy-theory-type videos. In other words, if you air-drop a human animal and they land on one page on YouTube, let's say a 9/11 video, a regular 9/11 news video, two videos later after autoplay it's driving them towards the conspiracy theories of 9/11. If you air-drop a person into the moon landing, two videos later you're inside of chemtrails, or the moon landing was faked, or one of these kinds of things.




[00:42:00]
And he found this is a systemic bias throughout the whole thing, because of the dynamics of the attention economy and its race to the deeper and deeper bottom of the brain stem, which is just a metaphor for what points us deeper into outrage, what keeps us more radicalized, 'cause if these things are better at getting attention, then that's what the algorithms have to put at the top of the menu.
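That systemic bias doesn't require anyone to intend radicalization; it falls out of greedy engagement maximization. Here is a deliberately simplified model of the dynamic (the catalog, the scoring function, and the escalation effect are all invented for illustration, not drawn from YouTube's actual system):

```python
# Toy catalog: 100 videos rated from 0.0 (mild) to 1.0 (most extreme).
CATALOG = [{"title": f"video_{i}", "extremeness": i / 99} for i in range(100)]

def predicted_watch_time(last_extremeness: float, video: dict) -> float:
    # Invented stand-in for the real model: engagement peaks for
    # content one notch more intense than what was just watched,
    # and falls off for big jumps in either direction.
    step = video["extremeness"] - last_extremeness
    return 1.0 - abs(step - 0.05)

def autoplay_session(start: dict, steps: int = 12) -> list:
    history = [start]
    for _ in range(steps):
        last = history[-1]["extremeness"]
        # Greedy policy: always queue the highest-scoring next video.
        history.append(max(CATALOG, key=lambda v: predicted_watch_time(last, v)))
    return [v["title"] for v in history]

# Starting mild, the session ratchets upward one notch per video:
print(autoplay_session(CATALOG[10]))
```

Nothing in the code mentions outrage or conspiracy; "pick whatever maximizes predicted watch time" is the only rule, and the session still drifts toward the extreme end of the catalog.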

Daniel:


[00:42:30]





[00:43:00]
So interesting. When you said they're split testing not just on me but also on demographics that I fit within: demographics go to basal motivations, beyond what I think of as my unique self, that are very interesting. So when I go to YouTube, if I watch a video on mathematics, I'll get more videos on math. But if I watch a video on Bruce Lee, I will get a whole bunch of UFC videos with titles that are the most engaging, and what the fuck is that about, right? 'Kid kills two UFC people in crazy attack' or something like this. Now, the moment I click on one of those, they preferentially fill my newsfeed at ten to one more than the math videos do. So then I watch 10 math videos in a row and I've still got UFC stuff in there. I watch one UFC video and there's no math videos left.


I don't even like that stuff, I would even ... right? But I'm part of a demographic where my biology said fights are evolutionarily very relevant, 'cause they're dangerous, 'cause I might die from it, 'cause whatever, and so I get a basal ganglia hijack in a different way. So they're not just paying attention to where my time and attention is actually going, but to what will have a stickiness characteristic, for maybe the worst reasons.

[00:43:30]
Tristan Harris:







[00:44:00]

Exactly! And 'the stickiness characteristic for the worst reasons' is, I think, the defining thing that's driving the whole system. How does the computer know that these are the worst reasons to keep you there? How do you even code ... and you used the word ethics somewhere around that, but this is really just: what does it mean to have something that is the deepest basal hijack? I mean, we should have a classifier maybe for how deeply something's going to hit our evolutionary instincts and then counter-optimize for it, or something like that. But one of the other dynamics that's relevant to what you're talking about is, as you said, it only takes a couple of those UFC videos to start getting pushed deep into the sidelines of all that more radicalizing stuff. Because there's only so much attention, that actually starts to occupy a greater and greater percentage of your attentional footprint, and so then you could imagine conspiracy theories becoming the norm.




[00:44:30]
This is true, by the way, also with Facebook. So it turns out that for a while Facebook was optimizing ... actually, as of last year, do you remember there was a moment after the election when Mark Zuckerberg wrote this big piece saying, "We have this new mission: it's not just to make the world more open and connected, but to bring the world closer together through groups and communities." And so they had this goal of, "Well, hey, Facebook groups therefore might be the answer." So let's start maximizing how much people get driven into groups, right? And so guess what, when you combine their goal of maximizing time spent with maximizing groups, what groups do you end up having people join?


[00:45:00]






[00:45:30]
So, if it turns out you join one group called, I don't know, some kind of doctors group or something like that, it's gonna recommend the anti-vaxxers group, the vaccine conspiracy thing with mothers. If you join Pizzagate, it's gonna recommend chemtrails to you. And so it creates these groups that basically are even stronger at persuading those certain beliefs, because the most active members of the group are the ones with the most radical beliefs, and then they post all this stuff that's very persuasive too, 'cause it's getting socially validated by people. You're suddenly inside of a new social environment in which all these people believe things that are radical, but they seem true because everyone else believes them. I saw cults actually earlier in my career, and I would join these groups where you would see really smart people influenced by kinda 'out there new-agey stuff' that didn't feel quite right. But if you're sitting there saying, "Well, that's a doctor who is believing this, that's a NASA space scientist who's believing this," they're smart people, so surely this must ... they can't be too crazy.


[00:46:00]
But that shows you how much of our epistemology, how much of our way of knowing what's true, is influenced by what other people seem to say is true. And so the reason Facebook I think is so dangerous is, it's a social persuasion machine. It can validate things that are so far from true; it can create this unprecedented level of validation for things that are just so far away from reality. And one of our research group partners, Renee DiResta, has really done this work with these groups and shown just how powerful it really is.

[00:46:30]
Daniel:





[00:47:00]

Okay, so if we think about what's unique about Homo sapiens, we say, alright, a horse is up and walking in 5 to 20 minutes, and a human takes a year. And we're like, how many multiples of 20 minutes go into a year? That's ridiculous, right? And even our next closest of kin, a chimpanzee or a gorilla, can hold onto mom's fur from the first few minutes as she moves around, and we can't even move our head for three months. Now we're like, okay, so why are we so embryonic for so long and take so long to develop? Well, because unlike the other animals, we change our own environments. We're niche creators, so we don't just live in the Congo or the Amazon. We figured out how to go from the arctic to the tropics to fricking everywhere, right?





[00:47:30]






[00:48:00]
We became the aquatic people, the mountain people, and then we also learned how to change our environments to create cities and be city people. And as a result, if we came hardwired for a certain environment, we'd quickly be maladaptive, because our whole goal is to be adaptive to lots of environments. So we come very soft-wired, to learn how to be adaptive to new environments, which means we are more impacted by our environments than any creature. That's the whole gist, right? It's 'cause we create environments, we have to in turn be created by them to be adaptive. So we're mostly not controlled by just our genetics but by our memetics; our genetics selected for memetic plasticity, for being encoded by environment. And so we look throughout history at what people have believed and how they've behaved as a result of their environmental conditioning: patterns that were ubiquitous within an environment but weren't common outside of it.








[00:48:30]
We say, okay, so there was a time where everybody in a certain area believed that God was Zeus, with a whole pantheon of gods around him, and blaspheming Zeus was the worst thing, and nobody believes that now. And they had the same genetic brains that we do, right? And they were as smart as we were. And there was another time where everybody believed the earth was flat, or that using the zero was witchcraft, or whatever the fuck things people have believed, right, ubiquitously, in whole populations. And then we also look at, okay, we've got an environment... say the Janjaweed, or let's take Buddhists first: across three millennia, we've got tens of millions of people who, because of the way that they're conditioned, same genetics basically, all won't even hurt bugs.



[00:49:00]






[00:49:30]
Then we go to the Sudan and we look at the Janjaweed, and we see an entire population where everybody hacks people with machetes. And we say, okay, well, human nature is... human nature is Buddhists and Janjaweed. And so human nature is radical plasticity, to be coded by its environment on how to behave within that environment. Okay, so as soon as we get that, we get the importance of the fact that our adaptive capacity is to adapt to whatever the environment says is fit. And now we have digital environments, and so many people have more friends online than in person, spend more time there, have more contacts, more total information intake, and more output. They have a level of shyness in person that they don't have online, so they engage more, et cetera. So the digital environment is their primary environment, and they're psychologically conditioned there. And humans are psychologically conditioned, in both their beliefs and their behaviors, more than anyone wants to admit.





[00:50:00]
We wanna think that we're the really smart, rational, critical-thinking ones that would never have believed in [inaudible 00:49:46] witches, just [inaudible 00:49:47] think about that for a minute. Or that we're really the ones who are ethically self-directing, who just happen to have the same ethics as everybody else around us. So when we get that, we say, okay: if we're building digital environments that are inexorably coding our deepest patterns of belief and behavior, and that just is the case, whether we want it to or not, then what do we do with that? What is a responsible way to relate to the facts of that?

Tristan Harris:
[00:50:30]
Exactly. I mean, that's really the essence now. We've sort of established the realm of the problem, and the problem implies the solution, which is just that we are embedded in a fabric that conditions the memetics in our choice making and our sense making. And since there is no subtracting or removing the fabric, there's no vacuum, the question is simply [inaudible 00:50:42]: what's the different fabric we wanna replace this with? [inaudible 00:50:45] We live inside of a habitat, a city of sense making and choice making, called a smartphone, and we've just [inaudible 00:50:52] that it's still got the casinos and the blue lights and the AI and the dangerous stuff that makes people believe conspiracies.

[00:51:00]






[00:51:30]
The answer isn't just to blow up the whole city and get rid of all technology. It's, let's make a livable city, paying much more detailed attention to wisdom and human values. The premise of the phrase 'time well spent' is that time is the finite resource. It's not that time needs to be optimized, but that time well spent over a lifetime is a life well lived. So what does it mean to have a life well lived? And to do that, I mean, our least vision of some of that with [inaudible 00:51:26] technology is to take a lens back at ourselves, to turn the telescope back on how the human system works, and say, what do we actually need? So the first thing is, we have a body. Do we want digital technologies to completely ignore our bodies and just maximize for thoughts, screen time, digital interaction, consumption, virtual interactions, et cetera? Or do we wanna pay attention to the fact that we have a body?



[00:52:00]
So there's gotta be some portion of our lives and our experience where, if our screens are supporting a certain menu of choices, some of those choices have to be supporting the life outside of the screen. They have to be supporting the choices that we wanna make with each other, using our bodies and going places. That's the simplest example of some of these things, but I think, Daniel, you have some places you might wanna go with this.

Daniel:


[00:52:30]




[00:53:00]
So I wanna just double-click on one thing that was an aside in what you said, because I think some people will maybe latch onto it, and I think it's actually deep to some of this. You mentioned driving people into polarization and gave as examples conspiracy theory and anti-vax, so I just wanna actually say something about these things. Are there ever conspiracies? Of course. Are all conspiracy theories true? Of course not. Now, how the fuck does one actually figure this out? So, what is a conspiracy? Well, some people have conspired to do something that gave them some advantage. Now, do we incentivize people sharing information with each other that they don't share with everyone else, 'cause the information creates a source of strategic advantage in a [inaudible 00:53:05] environment like capitalism or war? Well, of course, and we see Watergates [inaudible 00:53:09], we see Enrons, we see the Enigma program. We know those things, right? And for all the ones that we find out about, there's probably a whole bunch of them that we don't.

Tristan Harris:
They are conspiracies, yeah.

[00:53:30]
Daniel:






[00:54:00]

Now, does that mean that the whole world is run by lizards? Lizard aliens, right. So how does one go about making sense of something where the actual information might be intentionally hidden? There might be disinformation. People who come forward as whistleblowers might be real, like you hear, or might just be wanting to get attention for themselves and sell books. How the fuck do we figure something out that complicated? If we take vaccines: are we saying that polio vaccines did no good? How about the Gardasil ones? What about ones with Thiomersal versus not? Is it how many total vaccines happen at once? Have vaccines never hurt anybody? Well, that's silly. Are all vaccines genetically engineered to brainwash everyone into sheep? That might not be the story either. This means I actually need to have a [inaudible 00:54:14] point of view and very detailed sense making that is not a black-or-white, all-or-nothing, radicalized point of view.



[00:54:30]
Well, now that's a bitch, because I have to actually learn how to think, and I have to learn how to vet a bunch of sources of information that have their own motive to tell me what's true, a motive that is not my motive and is not just a motive of truth. So, you ask me: which vaccines do I think are good, which ones aren't, which ones are problematic? Where did the companies know they were problematic ahead of time? Where did they know they were problematic after a certain point but didn't pull 'em? Where could they not have pulled them 'cause of their [inaudible 00:54:47] responsibility to stakeholders? It's gonna paint a very nuanced, complicated picture.


[00:55:00]
And I'm also gonna say, I don't know the whole thing. I have a Bayesian probability on my best assessment. And the same is true of conspiracies. So there's a deeper principle here also: one of the basal motivations to go to is oversimplified certainty, in scenarios that actually need complex, nuanced understanding. And oversimplified certainty will always lead to partial views that will be in direct opposition with other partial views, which means radicalization.

Tristan Harris:
[00:55:30]






[00:56:00]
Yep, and it goes further: oversimplified certainties will do better in the attention economy than things that are complex, which, simply by taking that much effort and time, just won't do as well. If you had a Facebook that was all about "here's the really nuanced, complicated sense making around some of these conspiracy theories," people would just throw up their hands and say, "No, just tell me the simple answer." There's also a [inaudible 00:55:47]... I know you are friends with Ken Wilber, and one of the best lessons I ever got, one of the most amazing cognitive tools I ever got, was the pre/trans fallacy. There are different ways to hold a view. There's the naïve [inaudible 00:56:03] way of saying meditation's great, it'll just make everything vibrate and everything's wonderful. And then a lot of people are told, well, those are those crazy new age people, so we're just gonna ignore all that; therefore meditation is all not real, and none of this stuff is real, and those people are just voodoo.


[00:56:30]
Let's call that the savvy view. The first view was the naïve view; the next one is the savvy view. Then there's the wise view, which is someone more like you, or a Sam Harris: someone who does embrace meditation but actually has the self-awareness to see the complexity of what it's really about and what access it gives you. And the problem is that if you're someone who says meditation is great, and that's all you say, it's indistinguishable whether you're speaking from the wise view or from the naïve view.




[00:57:00]






[00:57:30]
This probably sounds... actually not true in this example, 'cause I made the naïve view sound deliberately so naïve. But for all sorts of things... I mean, people talk about drugs: there's a naïve view and then there's the wise view. People talk about... this is actually Carol Black... not Carol Black, what's her name? The woman who studied moral reasoning on abortion. There are just all sorts of issues where you can hold a view from the naïve place or you can hold a view from a wise place, and it sounds indistinguishable. And the pre/trans fallacy is: are you hearing someone who's saying something really wise as the naïve thing? This actually happened in the [inaudible 00:57:22], 'cause you could say, "Hey, do we have a problem with immigration where there's just not a control system on this thing, it's kind of like the gates are off?" And you can say that we do have that problem.


And so then Trump comes in and says, "We're gonna just slam down on immigration and do these things," and that sounds... I'm kind of botching this explanation, and it's a sensitive topic. One of the other [inaudible 00:57:42] sort of things about this is that the things that have pre/trans fallacies tend to be very controversial, and so people tend not to want to talk about them at all, because they can be misinterpreted so easily.

Daniel:
[00:58:00]





[00:58:30]
The example that you're giving is: is the answer to immigration to keep all the gates open and let everyone in with no discernment? No, that would be silly. Is the answer to immigration to keep everyone out for all purposes and get rid of everyone who's here that doesn't have proper documentation? Well, that's pretty silly too. Those are both simple, and there's a certain bias to select them just because they're simple. And the view that says, well, what is the right criteria to determine who comes in and who doesn't, and whether we should deport people or not... now we actually have to think through things pretty clearly and deeply and well, and you can't do that in a sound bite. And so, simply, the optimization for sound bites is gonna be the optimization for fundamentalism.

Tristan Harris:



[00:59:00]







[00:59:30]
Yep, exactly. And I think, to phrase it a different way: the most important existential challenges we face are in this controversial territory. They're not simple. And so one of the things that's most worrying to me is that the places where we need to spend our attention, and the kind of sense making we need to answer any of these problems, require us to go into the most controversial areas and to have this complex, nuanced understanding, which is gonna take time, effort, and discomfort. And so we need the most empowered, high-agency people in our society, the ones who are able to change these things, to be living in that place. And that means we're gonna be incurring this deeper tax, and we're gonna have to talk about controversial topics. There's this other dynamic where people actually don't even wanna talk about these things, because then they get tagged: if they touch anybody [inaudible 00:59:27], there's no way to touch it without [inaudible 00:59:29] being careful.

Daniel:




[01:00:00]






[01:00:30]
Interestingly, the first podcast we did on this show was with our good friend Zak Stein, from Harvard psychometrics, and we were talking about how to assess intelligence and how to develop it, and I asked for practical tips at the end. He's one of the leading thinkers in developmental psychology and education in the world, and he said something that was so obvious and so simple but important. He said, "Read books, and control your focus to be able to focus for longer than you think you can. Your muscles don't get bigger unless you lift an amount of weight that's actually hard, otherwise there's no evolutionary impetus to build them, and your attention won't actually get better unless you force your attention to keep staying when it wants to go." So he said, "Get off of Facebook, turn off push notifications on everything, and read books that don't flash. When your attention wants to go, come back, and train how to actually have attention, so you start to be more sovereign over your own attention." Duh!

Tristan Harris:
Duh. And also just to notice that the default settings of technology today, the defaults that any human animal experiences, are the exact opposite of everything you just said.

Daniel:
Yep.

Tristan Harris:
[01:01:00]







[01:01:30]
And it's also the immediate leverage point for solutions. Our work on this is not just raising public awareness and giving random talks, and it isn't only about creating a cultural awakening; this is an invisible, subtle threat to our capacity to deal with any challenge, because clearly our attention spans have been shortened dramatically by all these technologies. They're not making us think more deeply. And the passivity of the experience is also critical. We are not thinking actively; we are thinking passively. Recognition in the brain is a much easier [inaudible 01:01:24] process than recall. Free recall, thinking creatively, recruiting those cognitive resources to work out a problem, is very different than when we see the study sheet for a test and say, "Oh, that's the answer to number 2, that's easy, I would've gotten that." It's much easier to say "I would've gotten that" than to actually throw yourself at it and say, "Ah man, this is actually hard."





[01:02:00]
And so we make the mistake of thinking we're experts when we're not. And there's another dynamic on social media, which exposes us to so many things across so many different fields that it makes all of us feel like we actually are the experts in every field, because we've clearly read a lot about it. And so it gets very, very complicated very quickly.

Daniel:
So, wait. I've only actually read the abstract, and maybe I didn't even read the abstract; I read the title of the magazine article that talked about the abstract in a Facebook post. But I've read enough that I kinda know what's going on. So, simultaneously...

Tristan Harris:
But I've seen my friend re-share it, and Daniel's really smart, so maybe he's right, so therefore it must be true, because he retweeted it. Even though, keep in mind, Jack Dorsey retweeted the Russian propaganda; Donald Trump retweeted the Russian propaganda. There are a lot of ways we get fooled simply by re-sharing from people we think are experts.

[01:02:30]
Daniel:

So then, what happens is, we have increasing confidence about stuff that we know progressively less well.

Tristan Harris:
Yep.

Daniel:



[01:03:00]




[01:03:30]
Okay, so I want to come back to... we've actually covered a lot of territory, and I wanna come to a few parts that have an important aspect of insight that we haven't gotten to yet. You were talking about Facebook, and it's really not just Facebook, it's any platform that someone is interacting with deeply and sharing personal information with, but let's take Facebook right now, since it's been the example: having information about us that maybe we can call privileged information. There's information that maybe we don't even know they have. So talk to me about privileged information. And if there's a group that's creating a digital environment that is conditioning humans, how should we create that? That's a very complex topic, there's a lot of ethics and design in what is desirable, but is there at least something about incentive alignment, about not having incentive mismatches, that's important? Talk more about that and the fiduciary agreement.

Tristan Harris:



[01:04:00]
Absolutely. So right now we have... so long as consumers view these technology products as simple tools: Facebook is just a tool, so what I'm gonna do is go to the website, sign up, and hit agree on the terms of service. And when I hit agree, I'm entering into a peer relationship, a contract relationship. I am just as responsible... if something goes wrong here, if I'm addicted, it's my fault. If something goes wrong with certain data that is used to advertise to me, well, I clicked agree on the agreement, so therefore I'm responsible for giving that information to Facebook and not really reading that agreement. This is a huge problem, because, as you already mentioned, and as I learned from Jordan, there is a different kind of relationship operating here.

[01:04:30]






[01:05:00]
It's one of deep asymmetry. Think about you, how much you know about your own brain, what colors light up your brain, the private information that you know about yourself. Then think about Facebook and how much it knows: both the privileged information that you have knowingly shared with it, and the non-privileged information. The fact that just by reading your clicks, it actually knows your Big Five personality traits from your behavior alone. [inaudible 01:04:53] it knows more about us than we know about ourselves. In situations where we have one party with asymmetric knowledge it could use to exploit the other party, take an attorney and their client, or take a psychotherapist or a doctor and their patient, it's asymmetric on two counts. In the attorney-client situation, first, the client is sharing deep, personal, privileged-access information, and second, the attorney knows a lot more about the law than you do.
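The clicks-to-traits claim echoes published research showing that behavioral traces like Facebook Likes predict Big Five scores. Here is a minimal sketch of that technique on randomly generated data; the model, the data, and the trait are all stand-ins, not anyone's production system.

```python
# Sketch: predict a personality trait score from a binary "who liked
# what" matrix with ordinary least squares. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 500, 40

likes = rng.integers(0, 2, size=(n_users, n_items))  # user x item likes
true_w = rng.normal(size=n_items)                    # hidden trait loadings
trait = likes @ true_w + rng.normal(scale=0.5, size=n_users)  # e.g. openness

# Fit trait ~ likes on observed users.
w, *_ = np.linalg.lstsq(likes, trait, rcond=None)

# Score a new user from their likes alone.
new_user = rng.integers(0, 2, size=n_items)
print("predicted trait score:", new_user @ w)
```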



[01:05:30]






[01:06:00]
So on two counts there's this deep asymmetry, and it's the same thing with the doctor. If you line Facebook up side by side with that: how much power does Facebook have to exploit you? It knows way more about how all people's minds work than you do. It knows a lot more about the privileged information that you've shared with it, and the unprivileged information, including what colors light up your brain, and what word choices on political ads will most activate your amygdala, and all of these kinds of things. And yet it's governed by this peer-to-peer contract relationship. We have a name for those first kinds of relationships, attorney-client privilege, or financial services and their customers: a fiduciary relationship, which governs this asymmetry. And my favorite way to describe this is the priest in the confession booth, because a priest also has asymmetric access to you. But imagine a priest, or imagine Facebook as a priest, that's listening in to two billion confession booths, the same priest.



[01:06:30]
And it listens to your conversations outside the confession booth with everyone you ever talk to, and it knows your location, and it knows whose ex-romantic partners you clicked on at two in the morning or three in the morning. And it knows what colors light up your brain, and it's got a supercomputer next to it that processes all two billion people's confessions to predict confessions you're going to make that you don't even know you're going to make, and button choices you're gonna click that you don't know you're gonna click, and [inaudible 01:06:43] you're gonna have that you don't know you're gonna have. And then the last part is to imagine that the entire business model of this priest in the confession booth is to sell access to all of that to another party, so that they can manipulate you as best as possible with that information.

[01:07:00]





[01:07:30]
We would never allow this to exist. This is absurd, and we somehow backed ourselves into this environment because we never realized how much power we were giving away to this entity. And so what you can say, once you re-class this, is: okay, we can have priests in confession booths, but they can't be governed by a relationship where their entire business model is to sell access to manipulate you. We can have a priest in a confession booth where the church is paid by some kind of public tax, or we can have a priest in a confession booth where the client pays the priest, or something like that. But we at the very least cannot have the business model where priests make money by selling access to two billion people's confessions. And I think that's gonna require re-classing Facebook under a different kind of law, under fiduciary law.

Daniel:

[01:08:00]
Okay, so you said a lot; now I want to highlight a couple parts of it. Whether we're talking about a doctor or a priest or a lawyer, or any of those relationships, we're basically engaging them on our behalf, as an extension of our own choice and our own agency. And so the key of the fiduciary agreement is that they don't have an agency with respect to us different than our agency for ourselves. They aren't trying to win at a game against us, because then we wouldn't want to share that information with them.


[01:08:30]
So the lawyer only wins if we win, right? They're working on our behalf. And the priest is supposed to have that kind of dynamic too. So then we're willing to share the info with them, because they're operating on our behalf.


I wouldn't want to share a bunch of personal info with someone who was going to use it to try to sell me shit that I didn't want or operate against me in some way. Or empower others to do so.




[01:09:00]
Now, so we talk about privileged information, right? I share privileged information with the priest. But you said something that was really key about listening to my other conversations: the priest doesn't actually get to hear what I say to my wife, or my friends, or my kids, or my business partners. Just what I choose to share with the priest, which is already a lot.






[01:09:30]
But if I write to my wife on Facebook Messenger, I'm actually writing to Facebook as a platform, and then it's intermediating that message and sending it to my wife, which means it has the info from that message. And it knows not only what I'm saying to her, and what I'm saying to my actual therapist and my actual lawyer that I might communicate with over those channels, but also which channels I'm actually communicating with.


And so then we say, "Okay, has there ever been a precedent for that level of privilege escalation?" There's just no fucking precedent. And we aren't even using the level of precedent that we have for things way less powerful. So, at minimum, a fiduciary agreement might not be adequate, but at least as a starting place we have some precedent in law that says if people are sharing that much information and if there is that much assymetry of power, a fiduciary relationship is necessary.

[01:10:00]
Now, that would completely fuck the business model of these businesses, right? Because if Facebook said, "Okay, we're going to have our agency be your agency, an extension of your agency, and not try to maximize time on site so that we can increase the cost-per-click value of what we're serving, which is what we get the advertisers to pay for..."


Well, their stock price just plummeted and someone else who's willing to do the same thing beats them. And so how do we deal with that?

[01:10:30]
Tristan Harris:

I think these are questions that have to get worked out, because, as you just said, this instantly negates the business model, and there's no way for Facebook to be in its same position without that business model. It would be like rapidly transitioning off of all fossil fuels. I think these environmental metaphors are very similar.


[01:11:00]
Waking up and recognizing that you have these polluting fossil fuels that are powering your whole economy. And even once you've realized you don't want to be powered by that, you can't just do a full Indiana Jones swap where you take one off and drop in the other, replacing it with something that's different. It gives you a different economic output; the weight isn't the same. It can't prop up the world economy in the exact same way.




[01:11:30]
And so much like fossil fuels, I think we're in the situation where Facebook is waking up, and it's become this kind of tobacco company that it doesn't want to be. And the question is how do you transition yourself, or how have external parties transitioned you to do something that is not as dangerous?







[01:12:00]
I will say that, for the short term, they're working as fast as possible within their business model to try and align things. But fundamentally, with power this powerful... as you said, I think of this as a new species of power. I mean, I think it's the most dangerous species of power. We have two billion people's memetics, conditioning, environment, all strapped into this one AI, run by people who are 20 to 35 years old for the most part, making decisions for people whose languages they don't speak, whose cultural sensitivities they are not always aware of and can only find out about retrospectively. And the machine's already running. And once they find out that the runaway AI has taken off, they can't just flip the switch off, because it's a company, and it has a stock price.


And so how do we do that? I think that there's a bunch ... it depends on how radical you want to be in changing the situation and how much stability you want as you make the transition.

[01:12:30]
Daniel:

Okay, so Facebook has an agency that is not just our agency. It is not a benign tool like a hammer; it is actually an agent that looks like a tool. And the bitch of that is that if we're looking at a person, or we're looking at an animal, we're like, "Okay, there's agency there that can do something, I need to be conscious of that."


[01:13:00]
But if I'm looking at just a tool that I move and I click where I go, I don't think that. But if it has that ... so we gotta get that down. Alright. But its agency is ... It doesn't want me to be radicalized. It doesn't want me to believe a particular conspiracy theory or make me racist. That's not its agency. Its agency is just that I spend a lot of time online, right? Because that's how it sells the most advertising.



[01:13:30]
It just happens to be that I'm susceptible to getting very scared about Islamic fundamentalism, and I can become a racist and spend more time on site because of my fear there. Or I can become whatever it happens to be, Antifa, etc., because those fear motivators, from an evolutionary point of view, or those hypernormal stimuli, the I'm-just-clicking-on-Photoshopped-pictures-of-hot-girls-all-the-time, or whatever it happens to be, are what maximize my time on site.


[01:14:00]
So it is not wanting to make me a racist or an addict; it is just indifferent to doing so, and that happens to be what works to maximize my time on site, which is what its agency is. So it's not trying to guide the world, it's making money; it just happens to be guiding the world as an externality.


But if the externality is that more people have more broken information ecologies and are more certain about more wrong things and are more fundamentalist ...

Tristan Harris:
[inaudible 01:14:20] more lonely and more depressed and more teen suicides and more ... all of those are externalities [inaudible 01:14:26].

[01:14:30]
Daniel:

Now, the teen suicide and the depression and the bulimia, or whatever it is, are a major bummer, but they're not an existential bummer writ large. It might be an existential bummer to that kid who killed themselves. But World War III is an existential issue. And environmental degradation writ large is an existential, or at least catastrophic, issue that could be increasing in its probability based on these dynamics.

Tristan Harris:
Yes.

Daniel:
[01:15:00]
So it's just important to think about it at the actual level of consequence that it has.

Tristan Harris:




[01:15:30]
And that's what guides us every day. As you know, Daniel, I lose a lot of sleep over these issues; even though we're smiling a lot in this conversation, it's because I think the need to change this is urgent. And honestly, because these are the systems that people already live in so much, it would be best for those who are aware of this to simply not be using them at all, or, for those that are still jacked in, like the Matrix, to basically help the companies, including Twitter and YouTube and 4chan and Reddit and all these other ones that have similar dynamics, to do the best job they can, because the consequences they enact happen every single day.



[01:16:00]
This is not just a philosophy conversation, as you know; this is a very real, practical, daily conversation. And we have found that we've been able to shift it and accelerate some of the positive changes that they're trying to make. But that's going to be limited by the quality of the thinking and the moral care and sensitivity in the minds and hearts of the people at these three or four companies.

Daniel:
Okay. So talking about the practical side of it, when this podcast comes out, I want people to hear, you and I are both going to share it on Facebook. What the fuck?

Tristan Harris:
Yeah.

Daniel:
So, tell me how we reconcile that one.

[01:16:30]
Tristan Harris:

Well, it speaks to another version of the asymmetry of power, which is that they don't just own identity and how often people get their dosing, and they don't just own epistemology and sensemaking. They also own the channels by which we can reach each other.


Is there a better channel to reach a wider number of people that are relevant to us than Facebook right now? Or Twitter? The other platforms have similar problems, by the way.

[01:17:00]
So that is one of the other challenges; I'm speaking more in terms of challenges. You can email people, but, of course, that creates costs, search costs and effort costs, in actually trying to pull up the email addresses of all those people, and you'd probably have to message all of those people on Facebook anyway.




[01:17:30]
And part of this is because time is limited and we don't have much time to do anything these days, and we're so overwhelmed with GDPR emails and whatever else we're getting, it makes it very hard. We're very seduced by convenience. And convenience is another part of the attention economy, because the things that are the most convenient end up winning.


And so we're going to have to figure out how do we create alternative communication channels and make it easier for us to get information out to the people we care about without emboldening the existing actors.

Daniel:


[01:18:00]
Okay, so I want to get into a little bit of why this is hard. First, network dynamics are a real phenomenon. If I go to Facebook and almost everybody I know is on Facebook, there's the ease, the convenience: I can share something and everybody can possibly see it, I can tag them all, I can scroll through my friends list to jog my memory of who should see this thing, whatever. Awesome. Because most everybody I know is on there.






[01:18:30]
Now, if I go to some new platform that somebody launches, and there's like a tenth or a hundredth of the people that I know on there, I've gotta use ten different platforms; it's a pain in the ass. So, in order to have something else actually be competitive with that, it's gotta get the right network dynamics. But that would mean it would need a profit model that could pay for all the advertising, which is now also manipulation, and it would have to use similar kinds of dynamics to get people off of Facebook, and there...


So then we say, now we're going deeper than social media, we're going to capitalism. And saying, "The more money that I make, the more capacity I have to make more money. I have access to better financial services, interest on interest, it is compounding. And if I distribute that money I have less access to be able to continue doing that."

[01:19:00]
So there is a self-referential process to the accumulation/extraction dynamic. So for something else to be able to out-compete Facebook, without doing the things that make Facebook win within these particular game-theoretic dynamics, even though so much of the cost is externalized, is actually tricky, right?



[01:19:30]
So we've gotta factor all that in. Now, specifically, Twitter and Facebook and YouTube and all the ones you're mentioning are platforms, which means there's a central company seeking to grow its revenue and profit, interacting with users, presenting a relationship that looks symmetric but is actually asymmetric.






[01:20:00]
So it seems like there's a problem with the platform writ large. And we either have to make the platforms fiduciaries, which means they don't sell marketing and advertising, which is a totally new model, at minimum; or we have to not have it be a platform. Which means, if it's a decentralized protocol, it doesn't have a central orientation towards profit, it doesn't have the same motive or incentive, because the profit motive is actually core to why the algorithms do what they do.
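One way to picture the platform-versus-protocol distinction is a feed whose ranking runs on the user's own device, against criteria the user chooses, so no central party needs an engagement-maximizing algorithm to fund itself. A minimal sketch, with every type and scoring rule invented for illustration rather than drawn from any existing protocol:

```python
# Sketch: posts arrive over an open protocol; the client, not a server
# with its own agency, decides the ordering. Hypothetical structure.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    topic: str
    text: str

def my_ranking(post: Post) -> float:
    """User-chosen scoring: this user prioritizes close people and health."""
    score = 0.0
    if post.author in {"mom", "best_friend"}:
        score += 2.0
    if post.topic in {"health", "local"}:
        score += 1.0
    return score

def render_feed(posts: List[Post], rank: Callable[[Post], float]) -> List[Post]:
    return sorted(posts, key=rank, reverse=True)

feed = render_feed(
    [Post("stranger", "outrage", "you won't BELIEVE..."),
     Post("mom", "health", "walk this weekend?")],
    my_ranking,
)
print([p.author for p in feed])  # -> ['mom', 'stranger']
```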






[01:20:30]
So... we've got a lot of people listening to this who probably work in the blockchain space, the decentralized computing space. And everyone there hears this as a problem of a platform that could be replaced by a protocol. What are some things that you hope people in that space, who might actually have access to a different set of economic incentives, think about when they are exploring alternatives that might be able to address some of these problems?

Tristan Harris:



[01:21:00]
This is a hard question, and I think you know more about the protocols right now than I do. But I think they need to pay attention to attention as the finite resource they are in the process of managing, and yet be careful about trying to price everything in terms of attention, or turning it into this linear, quantified type of thing, because there's fuzziness in the nuance of human choice making. And some of the best choices in life are not rational, considered, and perfectly calculated, with perfect transaction costs and things like that.



[01:21:30]
I think there's an important thing to ask, which is simply: what are people's values? My colleague Joe Edelman has some really deep views on one of the things that's wrong with how we've viewed design processes in technology. Even the idea of the human-centered design process is that people have goals.


That's what we thought the Enlightenment was all about: oh, instead of all being in service to the king or something, we can each be our own king. We each have our own goals, and then everything is about helping people achieve their goals. And design is all about goal-driven design: helping people get their jobs done, tasks completed, things like this.


[01:22:00]
And instead, we need to ask, what does it mean to help people live by their values? Because that's what's sustainable. If you had a choice, like I'm visiting my mother right now, so if you had a choice between visiting your mother, as stated, that there's a goal of visiting your mother vs. what is visiting your mother really about? It's about being maybe present or loving or available and helpful. I don't know, there's a different set of values you might have.


[01:22:30]
If you could choose between visiting your mother but not being loving, present, or available, or not visiting your mother but still being able to be present, loving, and available for her, which one would you choose? You would pick your values. Which speaks to the fact that underneath everything are the things that are actually important to us.




[01:23:00]
And what's usually unfulfilling or problematic for us is when we have an experience where we reach our goal: "Oh, I did that thing, I visited my mom." But if I didn't actually do the thing that it was really about, which was being loving, present, vulnerable, aware, available, then I'm going to feel empty. "Well, I just kinda came up here." Which is where I am right now, so this is very present for me.






[01:23:30]
So I think about these things as: how do you make values-driven design choices that actually activate reflective processes in people, processes that encourage them to think about what's important to them, why they're doing something, what they're really trying to get out of it? For example, is email about knocking off all my email, playing this email whack-a-mole game and getting back to everybody? That's a goal-driven way to see email.


But values would be asking, "What is the most important thing that email needs to enable me to do today?" And right now email is not at all in service of our values. It's not really in service of our goals except in a naïve, transactional way to send messages to each other.




[01:24:00]
And so I think the revolution in software design and protocols is going to be to be sensitive to human values, and for people who are interested in this, I really recommend they check out Joe Edelman's writing and thinking about this. He's teaching a design course trying to get Facebook, Twitter, YouTube, and Apple designers into this kind of thinking process.


Because where we've really been lost is in trying to use our evolutionary instincts in environments where the cues those instincts rely on aren't present. Take something like cyberbullying. Right now this is seen as a content problem: we need better AI to detect cyberbullying and then just take down that bad, toxic content.

[01:24:30]
Instead of [inaudible 01:24:29] asking the question: what about the situation, the design of the social situation, is enabling or encouraging cyberbullying to occur? Because the natural checks on cyberbullying exist in a physical environment, where I would see someone get hurt by the thing I said, and I'd see friends come to their side, and I'd see other friends of mine look at me, if I'm the bully, in kind of skeptical disgust.


[01:25:00]
All of those cues are gone from our social software spaces. And that takes basically paying attention to how we reconstruct some of those social dynamics inside of software. Which is a very different discipline of design. I really want to pitch that there's a different way to do design, and it's going to take a radically different view of human nature and values and really abandon our attachment to goals.


And Joe even thinks that this is kind of a political revolution. We'll see where that goes.

Daniel:
Okay. Two things in there that I think are maybe some of the most important points that have been made so far, so I want to repeat them. You're actually speaking of goals as a kind of hypernormal stimulus.

[01:25:30]
Tristan Harris:

Yeah.

Daniel:



[01:26:00]
And so I want to just say something about hypernormal stimuli a little more overarching than we have so far. If we look at developed Western culture, we can see that overeating sugary foods, salty foods, fatty foods is ubiquitous, where every January 1st people are trying to change that, and suffering from diseases of overconsumption, etc. And the body image issues, accordingly, are ubiquitous, etc.






[01:26:30]
We see that there's a whole generation that grew up with easy access to online porn, which had never happened before, that actually has sexual dysfunction, erectile dysfunction, intimacy dysfunction issues, because with the level of hypernormal stimuli of a triple-X gang bang with the hottest people in the world, who are Photoshopped and have plastic surgery and whatever, there's just no real-life situation that will be able to meet that stimulus. So the hypernormal stimulus down-regulates the sensitivity to normal stimuli.


We see that people are swiping left and swiping right on dating sites where everyone is putting airbrushed photos of themselves, or at least doctored photos of themselves to where actual, normal people aren't attractive to them. And so the thing that's supposed to help you build relationships is damaging relationships. The thing that's supposed to help provide nutrients is damaging your physical health.


[01:27:00]
Now, goals are another interesting one, because checking off the checklist is another one of those things where, in the same way that sugar doesn't provide real nutrients but has hijacked the impulse for nutrients, checking off the to-do list doesn't provide a real sense of a meaningful life, but it hijacks the impulse for meaningfulness in the form of productivity, and the lowest-common-denominator version of productivity at that.



[01:27:30]
And so I can check off my to-do list every single day, get to my desk [inaudible 01:27:25], and realize my life was utterly fucking meaningless and totally, extrinsically controlled, but I got a lot of hits. I cleaned my desk, right? And so, just recognize that, from the point of view of capitalism, addiction is super profitable.





[01:28:00]
If I am supplying something... the naïve idea was that demand existed and so supply emerged to support demand. But then, of course, supply wants to grow, so it wants to manufacture more demand. That becomes what marketing is all about. And if people become not just interested in my thing but addicted to my thing, that's super fucking profitable. If I can drive the addiction, that's super profitable.


And so we notice these hypernormal stimuli, like sugar creates a dopamine response more than a salad does. But if I eat ... So I feel better in the moment when I eat it, but my life feels worse as I keep eating it overall. The salad doesn't give me a hit right away, but if I keep doing it, my baseline goes up.


So with every hypernormal stimulus, I get a momentary hit, and then a drop and a degradation of baseline over time. That's called addiction.

[01:28:30]
With the things that are actually healthy I get no hit and my baseline goes up over time. And this is a characteristic arc of the way we have to move civilization, which is basically if I'm peddling addiction, what I'm doing is evil. If I am supplying hypernormal stimuli that are going to hijack people's choice making in a way that leads them to having a worse life, lowering their baseline, even if I justify they made the choice, fundamentally, the world would be better if I didn't exist, and my business didn't exist and I wasn't doing this thing.
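A minimal numerical sketch of that arc, with invented constants purely for illustration: the "hit" stimulus feels better on day one, while the "no hit" stimulus wins on baseline over a month.

```python
# Sketch: hypernormal stimulus = immediate hit, eroding baseline;
# healthy stimulus = no hit, rising baseline. Constants are invented.

def simulate(hit, baseline_drift, days=30):
    baseline, felt = 0.0, []
    for _ in range(days):
        felt.append(baseline + hit)  # momentary experience when consuming
        baseline += baseline_drift   # long-run effect on wellbeing
    return baseline, felt

sugar_base, sugar_felt = simulate(hit=1.0, baseline_drift=-0.05)
salad_base, salad_felt = simulate(hit=0.0, baseline_drift=+0.05)

print(f"day 1 felt:  sugar {sugar_felt[0]:+.2f} vs salad {salad_felt[0]:+.2f}")
print(f"day 30 base: sugar {sugar_base:+.2f} vs salad {salad_base:+.2f}")
```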

[01:29:00]
So then we say, well, why are we so susceptible to hypernormal stimuli? And there's an evolutionary answer, which is: in the evolutionary environment there wasn't that much sugar. There was a tiny bit of sugar in the form of berries; famine was a real thing; berries were calorically dense; if we could get the calories in, we'd have a better chance of surviving. So we got a big dopamine hit from getting the sugar, and the same with the fats or the salt, which were rare.


Now that they aren't rare, we still have the same genetics, that means we have to be careful with the fact that we've created an environment we're not genetically adapted to. We have to be careful with that.

[01:29:30]
But there's another reason that we're so susceptible, and you and I have talked about this a lot, I'm just putting this narrative in there: in the 250 thousand years of Homo sapiens' existence, until not that long ago, we lived in tribes. And in those tribes, mostly what we were eating was nutritious stuff, so our bodies weren't actually starving for nutrients.



[01:30:00]
We had an unbelievable depth of social interaction: 150 people who knew everything about us, whom we knew, whom we were bonded with, who had our back. And so the normal stimuli we evolved for were being met; being full meant there wasn't a vacuum that needed filling.






[01:30:30]
Now, if I don't have a tribe, and I don't have any security or people who really know me, and I think if anyone really knew me they'd hate me and reject me because I'm such a fuck, and because I have a whole world that has set it up to be that way, then that void of human relationships makes me want to watch humans on TV and watch humans on social media and watch humans on dating apps, and get a hypernormal stimulus of human interaction. So the hyponormal environment makes us more susceptible to hypernormal stimuli.


So we notice that when people go to a party where they're really connecting with their friends, they don't check Facebook every five minutes. And we notice that when someone is camping out in nature and they're eating food that they're finding in their garden, they don't crave sugar as much. When they're exercising more, they ...

[01:31:00]
So there's a place where, at a deeper level, with the nuclear family, we've had money replace our need for each other. So we don't need each other anymore. I have no idea who the fuck my neighbors are, and I wouldn't even like them. As opposed to: we've always needed each other, and we've replaced that with this abstract thing.


There's a hyponormal world, where the core needs of being a fulfilled human aren't actually met or even possible, that makes us susceptible to being hijacked by everyone who has supply and wants to manufacture demand in us. So when we think about the new platforms: to not be evil, they have to not be doing this. Right?


Go ahead.

[01:31:30]
Tristan Harris:

And they have to... at a high level, what you're saying, the sort of simplified, talking-point version of it, is that the solution to addiction isn't abstinence, it's rich fulfillment. It's being embedded in the kinds of things that make life awesome. And, like you said, when you have people at a party really enjoying themselves and talking to each other, people just don't even have that itch to look for technology, when people are really present for each other in a meaningful way.

[01:32:00]
And therefore, I think... Joe and I have been talking a lot about a sort of mission statement: that our mission in technology right now should be to repair the social fabric. Meaning, to repair these hyponormal environments so they're at least baseline normal environments, so that we can take on our world's most pressing challenges.



[01:32:30]
Because if we're empty on the inside, you can have the perfect sensemaking environment around what's true and which conspiracy theories are true or whatever, but if we're still lonely and at home and isolated, that's really problematic.


And so I think when we're thinking about the people out there that are building new platforms, how can your mission be to repair the social fabric and actually fill up the [inaudible 01:32:41] that are currently empty that need to be filled in first before we can take on anything that's worth doing.

Daniel:

[01:33:00]
So I want to add one more part to this, and then I want to hear your response on it. You and I have talked a lot about how rivalrous dynamics, win/lose-type dynamics, always cause harm. Direct harm, because I'm trying to win against you, because your win and my win can't happen simultaneously. And indirect harm to the commons, because we're each trying to extract more resources from the commons than the other, or externalize harm there, or whatever.


Now as we have win/lose games that are multiplied by increasing technology, then exponential technology, the amount of harm starts to become catastrophic. So we're looking at exponential technology in a particular space right now, social media. We could look at in extraction of resources from the environment or pollution or warfare tech or whatever and we'd see similar things.

[01:33:30]
Now, the saying is that money is the root of all evil. We can say, "Actually, power-over dynamics are the root of all evil." Evil is the idea that I advantage myself by controlling, harming, or not respecting the sovereignty of you. That's the root of why we harm other people: there's an incentive to do so. Money just happens to be deep in that stack, but it's not the only thing.

[01:34:00]
So money is one instantiation of the class of power-over dynamics. And that's what we ultimately want to address, because in the presence of exponential technology, the power dynamics that have always led to war and environmental destruction, and earlier civilizations' collapse, lead to levels of collapse we can't actually make it through. So we've got to do something different than we've ever done.



[01:34:30]
So what that means is, anytime I'm treating someone as an other that I'm trying to get to do something I want, I should have a red flag go off inside my head. And that means... and I'm taking a strong view here, and I have a company that does marketing, and I have to try to constrain it from being evil, because to just not go bankrupt it has to work, so I understand this firsthand.


But marketing is fundamentally a manipulative endeavor to get other people to do what you want them to do. And ethical interaction is to interact with someone in a way that actually acknowledges their sovereignty, honors it, and tries to increase and support their sovereignty.

[01:35:00]
So I know you can't actually be happy and have a good life if you aren't sovereign. So the degree to which I'm trying to hijack your sovereignty will always be unethical. So then I have to say, "Okay, I don't want to manipulate you to do the thing I think is good." "I'm the new guy that has figured out what good is, and I'm going to get everybody to do it" is the same fucking old story, right?




[01:35:30]
I want to interact with you in a way that makes you less susceptible to manipulation by everyone, including me. And better at making sense, yourself, of the world and making good choices yourself. And better at sensemaking choice making than me, so that you help a better world that I also get to interact in.


So then we say: okay, how do we build technologies that increase the sovereignty of everyone who interacts with them, rather than trying to guide their behavior, aligned with our sovereignty, our sense of [inaudible 01:35:44]? I would offer that that's one of the very deep things we want to be thinking about when we're looking at new technology.

Tristan Harris:

[01:36:00]
[inaudible 01:35:51] completely. I mean, this is back to your point on infinite games and having shared interests. I always say that the only form of ethical persuasion is where the persuader's goals are aligned with the persuadee's goals.


The challenge becomes in ... it's an infinitely deep philosophical topic, obviously, the challenge becomes when, as in the case of younger people, or people who don't have their own goals, or they've atrophied their own sense of goals, what does it mean to interact with someone who hasn't developed their goals when you are going to be implanting new goals?



[01:36:30]
Because successful advertising is, you didn't actually have a goal, but I was able to convince you of this new goal anyway. And now I won, because you intrinsically are pursuing that goal on your own.







[01:37:00]
But I think the point that you are really getting at here, and the way I always thought about this, is: how do you have a developmentally appropriate form of persuasion? Meeting the person at the level at which they are conceptualizing their goals and their interests and their values. Meet them where they're at. Make available a developmentally more open, higher-agency, higher-capacity view, so they can see that there are richer options than where they are, without pushing them directly into any one of those areas.


You can't rip off the bandaid, or, as my friend says, repeatedly kill Santa Claus over and over again. When you're doing persuasion, it takes an incredible sensitivity, and it takes a differentiated understanding that different people have different underlying values and goals and are in different places in how they make meaning.



[01:37:30]
But a lot of it just comes down to an incredibly deep form of empathy. And one of the challenges that I think we still have to face is: how do you do this when there's a certain urgency to the level of threats that we face? Because as the timelines shorten and tighten up, the rush to persuade more effectively becomes more urgent.


And so these are dynamics that I think create ethical conundrums worth asking about, because, at the same time, while all of this is going down and we care about the philosophy, if the Titanic is sinking, we also have to take drastic measures to try to make sure it doesn't sink.

[01:38:00]
Daniel:

Okay, so this has been extraordinarily valuable. I'm happy that all of this is available in one place and we can share it with people. In the show notes we will make a link to the Center for Humane Technology, to your personal website, to Joe Edelman's work, and to Jordan Greenhall's work that you mentioned.


If there was anything else mentioned in here, we'll find it and put it there.

[01:38:30]
A couple questions, if people want to learn more, are there any other resources that you would guide people to? And if people wanted to support the work that you are doing, is there a way that people could do that?

Tristan Harris:


[01:39:00]
Great question. In terms of resources, I think those are great starting points. There are lots of talks and writing available online; they've been very popular. There are also, on our website, small ways that people can try to make their phone hijack their evolutionary instincts less, even though, when you've gone through the whole conversation we just had, clearly making your phone grayscale is not the thing that's going to solve the problem for everybody.


And in terms of support, this is really a movement. This is something that's going to take everybody; it's going to take a global village to solve this problem. Everyone should be aware of the problem. The first step is a cultural awakening: making everybody aware, wherever possible, and dismantling some of the common narratives that this is business as usual, or that this is just persuasion as we've always had it.

[01:39:30]
So that's the first part. And we're a nonprofit; we always welcome financial support, for anyone who's interested there. We're pursuing a lot of different pressure points to navigate the system: working inside the tech companies, which we speak to frequently; government pressure and hearings, where we work actively with the EU and US Congress; and also other constituencies, like the advertisers who actually fund the current platforms, to try and steer their dollars in a different way.

[01:40:00]
So we'd love help and support in any way people are interested in getting involved. Just check out humanetechnology.com. Or humanetech.com.

Daniel:


[01:40:30]
Tristan, this has been, like I said, very valuable. I appreciate your time. And really I appreciate all the work that you've been doing, because I happen to know what an insane schedule you've been keeping and the stress that you have been enduring to be at the front of this particular process. And it's important, and I'm grateful.

Tristan Harris:
Thank you Daniel. You've taught me a lot. And I'm really grateful for this conversation. Thank you for everything you have been out there saying and sharing.

Daniel:
Alright. That's it everybody.


If we find a product or service we love, we want to share that with our community. In some of these cases we will partner with the provider in an affiliate relationship which may result in a payment or benefit to Neurohacker Collective. We won't ever enter into such an arrangement or recommend any product or service we haven't researched or stand behind.

All content provided on this website is for informational purposes only. This information is never intended to be a substitute for a doctor-patient relationship nor does it constitute medical advice of any kind.
