Navigating the Upper Limits of Human Potential in an Interconnected World
Steven Kotler, a New York Times bestselling author, award-winning journalist and the cofounder/director of research for the Flow Genome Project, joins us in this episode of Collective Insights to explore human potential. Kotler discusses the future of civilization and our world while factoring in technology, VR, AI, biodiversity, flow states, creativity and other states of consciousness. How do those things intersect? How do we actually build a good world considering those things? And what are some of the underlying philosophical considerations?
In This Episode We Discussed:
02:44 Intersection of optimizing human potential and protecting biodiversity
10:57 Connecting to wildlife as a crucial part of the human experience
16:43 Bridging the gap between technology and the natural world to create a viable and compelling vision of the future
23:44 Cooperative relationships between everything sentient
28:33 Scarcity based vs abundance based consciousness
36:12 Complex systems and design: Potential for extinction
47:05 What are some practical solutions to the issues within complex systems?
53:34 We’re more capable than we actually realize
57:39 Exploring and expanding human capacity to thrive in the 21st century
01:06:05 Flow state as a tool and how to have a healthy relationship with your prefrontal cortex
01:15:40 Group flow states and consciousness
01:19:42 VR, AI and flow
01:21:21 How hypernormal stimuli relates to flow states
01:27:18 Intuition and long-term inspiration in flow
01:30:51 Ethics of virtual reality
01:35:07 The potential to improve reality with tech is hindered by business models that make a profit off addiction
Complete Episode Transcript:
Daniel S:All right. Welcome everyone to Collective Insights, the Neurohacker Collective podcast. I'm delighted to have Steven Kotler here with us today. I imagine most of you are already familiar with Steven's work. If you aren't, go to his website and you'll see his work has spanned so many categories of research, really with the central theme being what is the upper limit of the possible? The possible for human emotion, numinous experience, cognition, performance, for the world at large.
Daniel S:But in addition to exploring what is the upper bound of the possible, he's also exploring what is worthwhile within the possible. You'll see environmental work and solutions to large global problems and animal activist work, and work that wouldn't normally be defined just as potential hacking, but also meaningfulness considerations within the space of potential hacking.
Daniel S:As far as being a journalist and studying these topics and writing books goes, you'll see the kind of career as an author that everybody has wet dreams about, and very few people actually have happen. Many New York Times best sellers and other best sellers, and best book of the year awards, and people like Bill Clinton and Elon Musk and Richard Branson writing about how influential his books are.
Daniel S:He co-founded the Flow Genome Project with Jamie Wheal who's been on the podcast before, and so we'll talk about flow a little bit today. But because Steven's work spans what is the future of technology, what is the future of civilization, factoring in technology, what is the future of human psychology, how do those things intersect, and how do we actually build a good world considering these things? And some of the underlying philosophical considerations. Rather than be topic specific, this is going to be just a free-form exploration, we'll see where it goes. Steven, thank you for joining us.
Steven Kotler:Thanks for having me. Thanks for the lovely intro.
Daniel S:I want to start in a place that maybe isn't the most predicted place to start. I want to talk about your book "A Small Furry Prayer" and, for someone who seems to be interested in potential optimization, what the animal rights work is about.
Steven Kotler:Great question. I don't know where to start. I run, with my wife, what's called Rancho de Chihuahua and it is a hospice care and special needs dog sanctuary in the mountains of Northern New Mexico. When we built it, we wanted to do everything nobody wants to do. We live in the second poorest county in America with the highest incidence of animal cruelty, or one of them, and we work with small dogs, mostly Chihuahuas, because they're the second most euthanized dog in America, but they're very underfunded and people don't want to work with them.
Steven Kotler:We want to do all of the stuff that nobody wants to do, and I got here, I mean, honest to God, I got here for two reasons, that we can get to the world changing part in a second, but I've always been an [inaudible 00:04:04] geek. Back when I started out as a journalist, I would go, I'd work two years to set up a trip to Madagascar, so I could spend three weeks hanging out with lemurs and hanging out with primatologists who were hanging out with lemurs.
Steven Kotler:I kept doing this over and over and over again as a journalist. Simultaneously, and this is just an honest answer, when you are a writer, success in your career demands a certain level of public facingness. You have to interact with the public, you have to get famous if you actually want to get paid to write books. It doesn't work any other way, and I didn't grow up in California, but lived in California. I knew a lot of people along the way who got famous and a lot of them turned into assholes, and I wanted to guard against that always.
Steven Kotler:So, how I guarded against that was I always had a component of my life that was service. Before I was [inaudible 00:04:59] animals, in LA, I was running a non-profit called the Sports Writers GM. It was with Dave Eggers's organization, 826LA, and the LA Lakers, and we were teaching inner city kids how to be sports writers. It was a great organization, we were going to make a real dent in people's lives, and I hated the teenagers, I hated what I was doing. It wasn't working for me, and I met my wife and she was doing animal rescue.
Steven Kotler:I was like, "Wait a minute, you can take my love of animals and my service thing and I can blend them together and stop doing this?" Instantly, that's what happened. That's how I got here. How it has tied in with my work on technology and possibilities, the question you asked: when I look at the challenges the world faces, one of the first things I always look at is ecosystem services.
Steven Kotler:People know about [inaudible 00:05:49], right? We know about climate regulation and, "Oh, my God, climate change is a big problem." Well, there are 40 some odd ecosystem services and they're all failing as much as climate change, and biodiversity is one of them, but biodiversity is what the whole system is built upon in a sense.
Steven Kotler:To me, saving animals and protecting biodiversity, plants, animals, and ecosystems has just always been fundamental. I love animals, I love being outdoors, all that stuff. When I got into technology, so when Peter and I came together and wrote "Abundance," I was worried. I'd just come off "A Small Furry Prayer." This is a long story. Are we going to tell the full long story?
Daniel S:Keep going. It's worthwhile.
Steven Kotler:All right. I had written "A Small Furry Prayer" which was a book about the relationship between humans and animals and the work that we do here at Rancho de Chihuahua. My publisher sent me on what we still think is the last great American book tour. After me, they stopped sending authors on book tours. They don't work anymore. But they sent me on, they were like, "Animal people, they want to meet you." They put me on a 50 city book tour, I was on the road for frickin ever, and I was everywhere.
Steven Kotler:The events were very well attended, which was amazingly gratifying, but everybody who showed up was an animal geek. I wrote this book to try to communicate the need to protect biodiversity to the world, and it wasn't going any farther than the people who already knew this. That was a problem. That wasn't working, and at the same time, I had started looking at things like, "Okay, if we're really going to protect biodiversity, that means mega linkages. That means migration corridors, and things like that."
Steven Kotler:I don't know, do I need to explain these things? Or, I don't know your audience well enough to know if those words mean anything.
Daniel S:I think explaining a migration corridor is actually not a bad idea.
Steven Kotler:So if you look at the biodiversity crisis, if you look at the sixth great extinction and the fact that species die-off rates are 1000 times greater than normal, we've actually known since the '60s, since the work of a guy named Michael Soulé, who helped develop a field known as island biogeography, which is essentially the study of ecosystems on islands, how to solve the biodiversity problem.
Steven Kotler:What Soulé discovered, which is foundational to this idea, is that if you isolate a population on an island, two things happen. The population gets driven to the extremes, you get gigantism and dwarfism as [inaudible 00:08:31] spreads out into every niche, and they're very, very vulnerable to extinction events.
Steven Kotler:That's primarily because the gene pool is isolated, and if anything goes wrong, suddenly you have far fewer genes in the gene pool, and you're heading down an extinction pathway. Soulé's big realization was, "Hey, wait a minute, for a snake, for an owl, for pretty much most of life, you don't need an ocean to create an island. You just need a four lane freeway." In fact, we've fractured the ecosystem.
Steven Kotler:So, how do you protect biodiversity? You stitch it back together. Migration corridors and mega linkages are the idea that what wildlife needs more than anything else is continuous wildlands. There have been projects since then, The Wildlands Project first and foremost, which has been trying to create a linkage from Yellowstone to the Yukon, going across the spine of the continent. It is probably still the most important thing we can do to prepare for climate change and for species preservation, things along these lines.
Steven Kotler:I became very interested in mega linkages, migration corridors, these huge swaths of land, which just seem to be the only way to really protect species. If you're going to protect species, if you're going to care about this, you have to ask yourself a very simple question. Where the hell do we get a whole lot of land? How do we take land away from humans and give it to the animals?
Steven Kotler:I started looking at things like in vitro meat, cultured meat. 30% of our land is taken up by cattle, to say nothing of the fact that it takes enough water to raise an adult steer to float a US Navy destroyer, and everything else that goes along with meat. I looked at vertical farms, I looked at cultured meat, I looked at genetically engineered crops, all these things that free up land.
Steven Kotler:I was looking at all these solutions when Peter came to me and said, "Hey man, I've been thinking about this book about solving grand global challenges," and he had a whole bunch of stuff on the human side. How do you solve healthcare? How do you solve energy? All the work he was doing at Singularity University. And very little on the environmental side.
Steven Kotler:I said, "That's funny, I've got the other half of your book," so we came together, and that's where "Abundance" came from. To me, it's all ... I've been an animal geek for a long time, and it fed into the technology. I came into exponential technology looking at how you protect animals.
Daniel S:Okay. I have an interesting question for you, which is you keep saying you're an animal geek and that's why you got into this, but that's a statement of typology. But you also said that one of the things that happened with your first book on animals was that it was only getting through to other animal geeks and you wanted to break through. My question is, for people who don't currently identify as having a very close relationship to animals, or if they don't identify as animal geeks, why should they? Why is that an important part of the human experience that if someone's missing, they're actually missing something?
Steven Kotler:I answer it two ways. One, I answer it from a planetary perspective, right? Which is, if you look at everybody's dire predictions for the future, climate change we can sort of muddle through for a while. The biodiversity crisis threatens all of our lives. The web of life supports ecosystem services, all the things the planet does for us for free that we can't do for ourselves. Amory Lovins, who runs the Rocky Mountain Institute, calculated the value of all the ecosystem services we get for free.
Steven Kotler:This was, let's say, 10 years ago now. He did it with Paul Hawken, right? It was $40 trillion then, so now it's probably $60 trillion. In other words, we can't pay for it, and the giant lesson of Biosphere 2 is we don't know how to engineer ecosystems. We have no idea. We can't do this ourselves, and we can't engineer ecosystems, and if you go with James Lovelock's predictions, which are very dire, but if you go with ... He's a smart man, and he's been right before, he says if we can't stop the loss of biodiversity, we're not going to make it out of the 21st century.
Steven Kotler:First and foremost, you need to care about plants and animals because the future of the human race depends upon it. That's A, the non-animal geek answer. The other side of it for me: first of all, you and I could go off into a discussion about consciousness in half a second, and it could go on for five hours. But if you're interested in questions about the nature of consciousness, who can think what thoughts and how does that work, things like empathy and those kinds of questions, relationships with animals, close relationships with animals, have taught me so much more than just relationships with humans.
Steven Kotler:I've learned really, really fascinating things about psychology and the human brain, and neurobiology, and all that stuff. The possibility spaces that I explore, I have learned so much about from animals along the way. Personally, if you're interested in these kinds of questions, you should be interested in animals, just from that perspective, right?
Steven Kotler:Then, I could just go on and on, and of course there is ... I'm going to use the word spiritual out loud, and I hate using the word spiritual out loud, but there is clearly something, for at least me, there's something much deeper that is nurtured by plants, animals, ecosystems, wild things, and it's very, very, very deep in me. Truthfully, I spend most of my time with animals and out in the wild, away from people. I spend a lot more time with plants, animals, and ecosystems than I do with other humans. There's tremendous value there for me at a deep, soulful level, that is beyond language.
Daniel S:Yeah. All of those answers remind me of all the indigenous wisdom, the Chief Seattle quote about not weaving the web of life but being a strand in it, and what we do to the world we do to ourselves. The first answer is, if we make a human-built civilization that depends upon an ecosystem but debases that which it depends upon, it is a suicidal process.
Daniel S:Whether we're talking about biodiversity loss or ocean acidification or dead zones, or any of the issues, what we're looking at is that we are actually debasing that which we depend upon, which is pretty shortsighted, so we've got to get that right. And that which we depend upon is the rest of the living world.
Daniel S:As you mentioned, Biosphere 2 was a pretty serious scientific endeavor, and we just aren't there yet. So until we are really good at producing oxygen for ourselves, et cetera, there's a very utilitarian calculation. Then, there's a trans-utilitarian calculation, right? Which is that I know more about the nature of myself and selfhood in relationship with selves of different types.
Daniel S:That could be the same argument for why you spend time with people very different from you, and even more so with consciousness in even more different forms. I think this-
Steven Kotler:Do you know Paul Shepard's work? Just out of curiosity. Are you familiar with him? The Pleistocene paradox. He's a really interesting thinker about animals and consciousness, and as you were talking, it just popped into my head. He was the first person to point out that animals were our original metaphors, and we still think in animals, right? He's as big as a gorilla, and blah, blah, blah.
Steven Kotler:They were our original metaphors, and we are literally tinkering with and driving into extinction the very thing the brain helped you think on top of, and his point was that's going to have unintended consequences. You can't go down that road without ... I don't know if he's right or wrong, I just always thought it was super interesting and what you said reminded me of it.
Daniel S:I want to tie together two worlds that are at the heart of you, and actually maybe the thing that I like most about you. When I think about the digital natives who grew up without much in the way of interaction with trees and birds, but with screens from the beginning, they have developed an intuition for digital universes and not for the physical universe. There's all kinds of consequences to this, because in digital universes, there's no actual physics.
Daniel S:There's man-made, programmed physics that's based on click optimization for marketing funnels and whatever. Whereas there's an intuition for actual physics that every indigenous person in the history of the world, even without studying physics, just practiced: climbing trees and falling, pouring water from one thing to the other. They start to get-
Steven Kotler:That's an interesting point.
Daniel S:We see a whole generation of people who don't have an intuition for the actual laws of the universe that they live in. The laws that they understand are ones made by other people that are actually exploiting them, which is a very interesting thing, right? Now, if we look at the digital natives, and maybe the people who are forecasting that universe, they say, "Okay, well the digital world is subsuming the physical world, so let's continue to forecast that and have a not totally dystopian view. We live in a lovely digital universe. We jack into the matrix, but it's a nice matrix, right?"
Daniel S:There's a bunch of singularitarian models like this where we upload consciousness to the cloud and we can just make lovely digital universes with full abundance, et cetera. I would argue as you and I have discussed, that that is a rigorously dystopian world for a bunch of reasons. But most of the people I know who are pursuing that, don't have a deep first person relationship to biology.
Daniel S:They have a deep relationship to technology, and might have been somewhat alienated from biology, other than their own experience of consciousness and interacting with tech. Now, there's a very interesting question that comes up for me. You're talking about the wellbeing of animals and our fate being tied with theirs from a very biologically grounded point of view and a historically grounded point of view, but then also, forward-facing, what can tech do? There's an intersection of those, where most of the [inaudible 00:18:56] crowd is not very future tech oriented, and a lot of the future tech oriented crowd doesn't have a very deep nature relationship.
Daniel S:There's a reason why almost all the sci-fi is dystopian: there's just more ways to break shit than there are to build things. The Book of Romans, right? The path to hell is wide, and the path to heaven is narrow and steep. That's what "Abundance" and "Bold," et cetera, were speaking to. But give us a vision of what is the future of the relationship between the human and the natural world, between technology and nature, man, machine, and nature, and humans and animals, that is actually a viable, compelling future?
Steven Kotler:I don't know if I'm able to rise to the challenge. You were talking and a million other things popped up for me. I'm going to start there and then I'm going to work my way to your question, I think. One, I don't know if we ever talked about this, I co-founded an organization called Equilibrium, that literally was designed to bring environmentalists and technologists together, to bridge that gap.
Steven Kotler:One of the things that I saw when I was writing "Abundance" is you go into Silicon Valley and everything you just talked about, digital nativeness, that's taking place at a deep, cognitive level. You spend all your time staring at screens, your brain starts editing out what's unimportant, and what goes is the natural world, right?
Steven Kotler:I go into Silicon Valley and I'd be sitting down talking to people and I'd start talking about animals or plants or whatever, and they're just tuned out. It doesn't even register. The problem you're describing in terms of digital natives, is at a really deep, cognitive level, right?
Steven Kotler:You're essentially developing a cognitive bias where you're editing out the natural world, because it's not critical for your survival as far as your brain can tell. Of course, that's fundamentally wrong, but that's probably a left brain/right brain problem, but that's a different discussion. The answer to your question-
Daniel S:Briefly, just 'cause ... You'll riff on this. It's a third person/first person issue: I know animals exist, but I have no first person connection to them. It's the simple grounding problem: I've heard about the chemistry of mangoes, but I've never tasted one. I still don't know what the fuck a mango is. There's no amount of chemistry about mangoes that tells me what the fuck a mango is until I taste it.
Daniel S:You've experienced certain kinds of states of connectedness, in nature, on the mountain, that if someone hasn't experienced, they can't learn it in third person.
Steven Kotler:Interesting question about that. Open question, and here's why. I guess this gets towards my answer about the future, and it ties in the work I do on flow. Flow, for anybody who doesn't happen to know, is an optimal state of consciousness where we feel our best and perform our best. It's when you get so focused on the task at hand that everything else just vanishes.
Steven Kotler:Time disappears, sense of self disappears, action and awareness merge, and all aspects of performance go through the roof. So, what the research shows, and this is mostly stuff coming out of the adult development program that's been running at Harvard for 100 years, is that the more access to flow states you get over time ... you obviously have to do a bunch of personal work. Just surfing a lot and getting into flow doesn't make you Buddha. You have to do some of the homework along the way.
Steven Kotler:But it does seem to increase empathy over time, so the more access you get to flow, the more empathy you seem to have over time, or wisdom or perspective, and more importantly to exactly what you're saying, you start to notice the natural world more.
Steven Kotler:Some of this may literally be nothing more than a shift from left brain processing to right brain processing. The right brain tends to think about things in terms of global ecosystem, how is everything connected? And just that shift kicks you out of that digital native, "I live in this." But I will also say on the dystopian side, I was just reading a study, it was talking about some virtual reality work.
Steven Kotler:I can't remember where they were doing it, maybe Duke, but after as little as 40 minutes in virtual reality, people are coming out and having a very hard time, and they've been doing fMRI work to see how this works cognitively: they can no longer tell what's real and what's virtual. It's just all blended together, and that is obviously going to get more complicated with augmented reality. Yeah, the problem doesn't seem to be going away and I don't know what the solution is, in all honesty.
Daniel S:Something that is interesting, and first, I'm grateful that you actually say that: it seems to me that lifeforms other than humans are not simply resources on a balance sheet that provide some value to humans, ecosystem services and whatever, such that we optimize them for purely utilitarian purposes and, as soon as we could replace trees with carbon sequestration devices, then fuck trees.
Daniel S:It seems like other things that are sentient are actually beneficiaries of the system. They're sovereigns, they're not just things to be exploited. In your model, life has intrinsic value, not just extrinsic value for our commodification of it.
Steven Kotler:I make the ecosystem services argument, because people who don't value nature can value that. They're like, "Oh shit, my bottom line and my life? Okay, I get that." To me, honestly, I mean, it's a ridiculous argument, but I was thrilled, I mean for years people such as myself were like, "How the hell do you talk to people who don't see nature at all?"
Steven Kotler:Amory solved that problem by putting a number on ecosystem services, and I've been forever grateful. But you're right. Anyways, continue.
Daniel S:The thing that I was saying is that when you say you don't know what it looks like, it seems like what you know is that our technology being in service of the big picture, not just the parts, the relatedness of the big picture, the relatedness of the parts to each other, and having a care about everything sentient has to be part of the answer.
Steven Kotler:A, I completely agree, and B, we've talked about this a little bit before, but both of us fundamentally believe the more cooperative the system, the safer we all are. Certainly the lessons out of "Bold" and "Abundance" is cooperation trumps competition, especially if you move from a scarcity mindset with resources to an abundance mindset, right? Where you have more? When that thinking is there, you want cooperation. You don't want competition, and cooperation doesn't stop at the border of species.
Steven Kotler:I mean, it's really clear across the board, right? The whole world right now is on fire and dying of heatstroke, right? It's very clear, cooperation requires all of this. Flow is part of that, right? I've said for a long time, what Peter and I wrote in "Abundance" is that we have the ability to solve most of the challenges that we face, but it's abundance or bust. We either do this or we're doomed, and the best way forward is cooperative, and that's got to extend beyond the bounds of species, I think.
Daniel S:You know, people might consider Chief Seattle the original hippie in terms of my quote earlier, but most people don't actually think of Einstein as a hippie, and his quote that it's an optical delusion of consciousness to believe we are separate things, that there's one thing we call universe.
Daniel S:As soon as I try to say, "What are you?" and I take away the biosphere, you don't make any sense. As soon as I try to say, "What is a capital market?" and I take away electromagnetism or the gravitational field of the universe in which it's embedded, it doesn't make any sense. We start realizing that we think about individual things assuming the background they depend upon, but because we're assuming it, we don't factor it into our choice-making calculus, and then we debase the thing they depend on.
Daniel S:As soon as you get that, you get that the cooperative is actually just a result of understanding ontology better, right? We can't actually think of ourselves as separate things intelligently, 'cause it's just not true.
Steven Kotler:I always say, everybody I've ever met in the world ... Well, not everybody, but the vast majority of the people I've ever met where I've walked away going, "Wow, that person was really smart," there were two things that they knew really, really well: evolutionary theory and ecological systems thinking.
Steven Kotler:Those two things seem to me like the base layer for whatever else you know. It's very rare that I bump into somebody where I'm like, "Oh, my God, wicked smart. Amazing what their brain can do," and they're not extremely well versed in evolutionary theory and extremely well versed in systems thinking.
Daniel S:Those are a couple of critical frameworks, for sure.
Steven Kotler:Yeah, I think so.
Daniel S:Okay, so you mentioned briefly the relationship between a scarcity-based consciousness and choice-making process and an abundance-based one, and this is obviously core to the work that you have done with the book "Abundance" and others. For those who haven't read it, or who remember the details more than the big picture, what's the big picture here?
Steven Kotler:Well, let's just break down what I just said. Evolution tells us there are two options when faced with scarce resources, right? You can compete, fight over scarce resources, or you can cooperate and make new resources. For the vast majority of history we were competing. Technology, and this is one of the points we make in "Abundance," is fundamentally a resource-liberating mechanism.
Steven Kotler:It takes stuff that used to be scarce and makes it abundant. The classic example here is photographs. Paper and film used to be really expensive, so you used to take only a few photographs. Now we have digital photography and the problem isn't that we take too few. We take too many, and it's a sifting and sorting problem.
Steven Kotler:We see this when I talk about energy. Solar is a technology that's on an exponential curve. If you track that curve back, you see where we started: in 1975, solar was $101 a watt. Today it's less than 30 cents. We had solar selling last year in parts of the United States for cheaper than coal, and if you track the doublings out, 14 years from now we can meet 100% of our annual needs with solar.
Steven Kotler:That's neat, and then the following year, it's 200%. Then it's 800, 'cause these are exponential doublings, every 18 months, which is the curve solar's on. We're going to move from a scarcity mindset into an abundance mindset, and one of the things we see in business a lot is if your business is based on a scarcity mindset, you're going to be out of business, because that doesn't work anymore.
Steven Kotler:Energy companies are rapidly moving into solar. I just looked at a list of all the coal plants that have been canceled in China in the past year. It's something like 168 plants. It's a nuts number. Don't quote me on that number, but it's a nuts number. These are facts, and that's the core idea on that one. I don't know if I answered your question, but I did, I tried.
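To make the doubling arithmetic behind that exchange concrete, here is a minimal sketch. The 18-month doubling period and the roughly-14-year horizon come from the conversation; the starting share of 0.2% is a hypothetical number chosen only so the result lines up with the quoted figure, not data from the episode.

```python
# Illustrative sketch of exponential doubling, the arithmetic behind
# "14 years from now, we can meet 100% of our annual needs with solar."
# The 0.2% starting share is an assumed, hypothetical figure.

def years_to_full_coverage(current_share, doubling_period_years=1.5):
    """Years until a supply share that doubles every `doubling_period_years`
    grows from `current_share` to 100% (1.0) of annual needs."""
    years = 0.0
    share = current_share
    while share < 1.0:
        share *= 2
        years += doubling_period_years
    return years

# Starting from 0.2% of annual energy needs, nine 18-month doublings
# carry the share past 100%:
print(years_to_full_coverage(0.002))  # -> 13.5 (years)
```

The point of the sketch is how insensitive the horizon is to the starting share: because each doubling only adds 18 months, even starting ten times higher or lower shifts the answer by just a few years.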
Daniel S:You actually spoke to something that is a very key framework for how I think we think about what has to happen to make a viable future civilization. If you're open to it, and forgive me for doing this impromptu live, I want to share two frameworks that we think about. They're directly related to what you're talking about and I want to hear your thoughts on them. This is on the fly. We haven't talked about these before.
Steven Kotler:Remember to use small words when talking to me.
Daniel S:You're funny. Okay. I'm going to do the higher up in the stack concept first and then the deeper one. You were mentioning the transition that technology created from something that was more scarce to something that was more abundant and the photographs as an example. Photographs as an example was the movement from photographs being made out of atoms to photographs being made out of bits, right?
Steven Kotler:Mm-hmm (affirmative).
Daniel S:Of course, there is a fundamentally different physics to atoms and to bits, and here is something that I think that people who talk about abundance, particularly out of Silicon Valley, particularly out of software analogies and Moore's law analogies, they miss the physics on. I want to share just the way we think about it and I want to hear your thoughts.
Daniel S:If we think about the materials economy, the physical stuff we build the world out of, so not the human creativity economy but everything else, we can think about all of it as three different types of physics: the physics of atoms, the physics of energy, and the physics of bits. Atoms, we have a fixed number of them. We can mine, get a little bit more, there's some consequences of that. We start capping out, we get diminishing returns on the number of atoms, right? We can start looking at asteroids.
Daniel S:There's a conservation of mass that is the underlying physics, right? We have to move to a closed loop materials economy where we make new shit out of old shit with atoms because we've got a fixed number of them, right?
Steven Kotler:Totally agree, yeah.
Daniel S:When it comes to energy, we don't have a fixed amount. We're getting more photons in every day, and they're going to be entropically burned up, so we don't need a closed loop on photons, but we need to work within the bandwidth of what we get. We have to actually pay attention to what is the daily income that we have, and how do we work within that? Where the energy is what's being used to upcycle the atoms into new higher forms.
Steven Kotler:Agreed. It is also 16 terawatts of-
Daniel S:It's a lot.
Steven Kotler:... power, every hour. I don't think we're going to run out of sunlight soon.
Daniel S:It's a lot, right? It's a while before you get to higher numbers in the Kardashev scale of when you have to start harvesting sunlight in a different way. But compared to bits, it's still a very different physics, right?
Daniel S:Which is, now with regard to bits, I basically have an indefinite amount of bits, and bits per unit time limited only by the energy and atoms necessary for the computational substrate to store and retrieve them. When I think about those, I say, okay, we have basically conservation on energy, conservation on information, conservation on matter as different types of conservation laws, different physics. Those three are not interchangeable for each other, they are not interfungible. If I have one unit of currency across all three, then what we find is that I don't get exponential returns on atoms. I do get exponential returns on bits, because I make the software once and I can sell it an indefinite number of times. With the atoms I have cost of goods, right? With the energy, it's more bound to the physical dynamics of what it takes to even build the energy harvesting and distribution; it kind of shows up as different, in the middle. What we see then is, all the unicorns are software businesses. Silicon Valley started moving away from almost all things hardware, and even a lot of energy, to software, because exponential returns beat multiplicative returns beat linear returns.
Daniel S:As a result, that's another example of how the virtual bits sit on top of the physical, and we are financially debasing that which they depend upon. There's a certain abundance that occurs in software by digitizing shit, but unless we go to a singularitarian universe where we live in a purely digital space, if we still live in a physical space, the exponential returns of bits don't automatically change the conservation laws on atoms.
Steven Kotler:[crosstalk 00:35:18] no.
Daniel S:Visualize it is ...
Steven Kotler:[crosstalk 00:35:21].
Daniel S:Factor those three uniquely.
Steven Kotler:I always say circular economy, closed loop economics, however you want to talk about it, it's the fundamental cooperative system we have to master. If we're going to pull this off we have to go cradle to cradle for the very reasons you're talking about. I'm amazed at how deeply those ideas are actually starting to penetrate into business these days. I was just looking at some numbers; Duracell had an eight year program on figuring out how to recycle batteries. They were recycling 0% of their batteries. Eight years of hard work on the problem, and they're now recycling 4% of their batteries. You know, the atoms problem is very real. Okay, that's your first framework. Let's go to your deeper one.
Daniel S:Okay, the deeper framework. You said evolutionary theory is a critical thing to understand, along with systems theory. I want to add a third one that I think is the other one you pay attention to all the time. Which is, I'm going to come back and say it differently. Think about evolution as a creative process, a process by which new shit comes into the universe. Okay, the process of evolution as we generally understand it is an unconscious process. Nobody is designing it; it is mutation and selection, and the selection is kind of this radical parallel processing of what actually makes it through. It's extraordinarily slow, it's extraordinarily parallel, like an unbelievable amount of parallelization. Almost everything fails, yet the things that make it through have an un-fucking-believable degree of complexity.
Daniel S:Specifically, what comes through is what we call complex systems: self organizing systems, and systems within systems within systems. If a predator over expresses itself relative to the prey, then it goes extinct. It's not even that a species makes it through, it's whole ecological niches that sustain themselves that make it through. Right? Natural selection is, over a long period of time, actually selecting for the complexity of whole ecosystems. All right, awesome, that's one process.
Daniel S:Now, technology, starting say with Homo habilis and stone tools. Let's just focus on sapiens, because we killed all the other hominins, which is a core part of this thing. Technology is not an unconscious, parallel, radically slow complexity-generating process. We can actually understand an abstract principle and then build that principle. We can make simple and complicated structures, right? If you cut me, I heal. If you cut my laptop, it doesn't heal. You burn down the forest, it regenerates. You burn down the house, it doesn't regenerate. It's not self organizing; it's a blueprint from outside, built by agents on the outside. As a result, it doesn't have the built in feedback loops of sustainability with the rest of the ecosystem.
Daniel S:Yet, we can build it fast as fuck and leverage it like crazy. Mathematically, evolution and design, what we call design, technology design, are totally different math. We get complex systems out of one and complicated systems out of the other. Complex systems are defined by comprehensive loop closure: very slow, unconscious. Design is conscious, open loops: very fast, complicated. We say now that the relationship between these two is defining every part of our world and whether we go extinct or not. It's a really key thing.
Daniel S:The third part is, we've got to understand evolutionary theory. We've got to understand how all the parts fit together, but then we've got to understand that there are parts you can't understand by evolutionary theory. The tools are actually a different math than evolution. Evolution brought about a creature that understood shit and made shit, not through an evolutionary process, but through a design process. Now why is this a big deal, and how does it come back to your abundance versus scarcity thing? I apologize for this because it's taking a moment, but this is actually one of the most exciting things for me and I'm really excited to hear you riff on it.
Daniel S:In evolution we have rivalrous dynamics. The lion and the gazelle have a rivalrous relationship in the moment of a chase, but their power is very symmetrically bound. Lions only get faster very slowly, as gazelles also get faster. The micro rivalry of this lion and this gazelle being rivalrous, or this lion and this lion being rivalrous, creates macro symbiosis of lions and gazelles as wholes depending on each other. Either one would go extinct without the other. It's because of this power symmetry.
Daniel S:If I could make lions a thousand times more predatory within one generation, they would kill all the gazelles and then go extinct. We see this with viruses that over express, or a new invasive species: it kills the whole ecosystem and goes extinct. The binding of power symmetry across the field is what makes micro rivalry turn into macro symbiosis, so you don't get rivalrous amplification. Now, we still have rivalrous dynamics from evolution, but we amplify the fuck out of them with technology as an amplifier, right?
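Daniel's power-symmetry point can be made concrete with a toy predator-prey simulation. This is a minimal Lotka-Volterra sketch; all the rate constants and starting populations are illustrative assumptions, not anything discussed in the episode. With a bounded predation rate both populations persist, while a thousandfold-amplified predator wipes out its prey and then starves:

```python
def simulate(predation, steps=20000, dt=0.001):
    """Forward-Euler Lotka-Volterra model: prey x, predators y.

    All rate constants are illustrative assumptions."""
    x, y = 10.0, 5.0  # initial prey and predator populations
    for _ in range(steps):
        dx = (1.0 * x - predation * x * y) * dt        # prey: growth minus predation
        dy = (0.1 * predation * x * y - 0.4 * y) * dt  # predators: conversion minus death
        x, y = max(x + dx, 0.0), max(y + dy, 0.0)      # clamp at extinction
    return x, y

# Symmetrically bound predation: both populations persist
# (micro rivalry, macro symbiosis).
prey, predators = simulate(0.1)

# Predators made 1000x more effective: prey crash, then predators starve
# (rivalrous amplification self-terminates).
prey_hi, predators_hi = simulate(100.0)
```

The 0.1 versus 100.0 predation coefficients stand in for the "thousand times more predatory" lion: rivalry within a bound power symmetry sustains both sides, while unbounded amplification collapses the whole system.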
Daniel S:Now, we get these power asymmetries, where in nature the most badass lion is only a tiny bit more badass than the next most badass lion, but Trump or Putin have a shit ton more killing capability than you or I do. Like hundreds of billions of times. That power asymmetry is never seen in nature. A great white shark doesn't have mile-long drift nets. We can fuck up whole oceans that took billions of years to develop in a very short period of time. We're modeling ourselves as apex predators, but apex predators with mile-long drift nets and nukes. That's this symmetry unbinding.
Daniel S:If we take the rivalrous dynamics of evolution and multiply them by technological amplifiers, it's an unstable equilibrium that self terminates every time we run the math on it. Unless we get something else, which is a process that is neither evolution nor technology: the design of whole evolutionary systems rather than parts. It's getting the relationship between the rivalrous dynamics of evolution at a micro level and the macro symbiotic dynamics, and designing that in. How we have technology not be dystopian is that we actually have to change our relationship to what we think of as unconscious evolution. If the technology's in service of unconscious rivalrous dynamics, but rivalry on an exponential curve, it self terminates. That's the feedback. I'm curious to hear your thoughts on it.
Steven Kotler:I'm not sure it's a third feedback, because I would tuck complexity inside of systems thinking. I know it doesn't sit there formally; I get that they're two different things. In my brain, I think about them in the same way, because whenever you're dealing with ecosystems, you're dealing with interactions of complex systems with complex systems. This is of course the great problem with exponentials. The great problem with technology is that it's incredibly empowering, but you also put the power of God into the hands of a lunatic all the time. Really easy to do. I mean, Andrew Hessel and myself and Marc Goodman wrote an article for the Atlantic Monthly a couple of years ago about how to weaponize synthetic biology. How easy it is for one person with a $5,000 lab of used equipment to weaponize syn bio. Those are existential threats, which I know you spend a lot of time thinking about. I think they're very real.
Steven Kotler:It's interesting. If you ask Peter this question, he will tell you that he thinks the fact that we're moving into a trillion sensor economy, and there's nowhere to hide, big brother is always going to be watching, that helps him sleep at night. Moving into an era of enforced radical transparency is his answer to this. I don't know what my answer to it is, other than yes, these are real phenomena, and clearly in a black swan universe where people are empowered with exponential technology, it can go wrong a lot of different ways. As you said, it's a hell of a lot easier to break shit than to build shit, right? I think they're interesting questions. I think these are real questions. I think more than anything else ... Let's just say complexity science is a third framework that you need to add in with your evolutionary theory.
Daniel S:I was actually saying tech as the third.
Steven Kotler:Oh, tech as the third.
Daniel S:Complexity and systems I will put together.
Daniel S:Evolution is one. Tech is a process of creation which is not evolution that affects the whole system.
Steven Kotler:It's interesting, what do you think of Kevin Kelly's argument that tech functions as another form of life?
Daniel S:I think if you go to the macro picture, you can see that it seems like it's kind of autopoietic, it's growing. But it's not autopoietic; it takes something other than itself to make it, and this is why my laptop does not repair itself. I repair it, and so it's because of its embedding within a complex system that it gets repaired and it gets evolved. It is not self evolving and self repairing the same way a complex system is. The boundary of a cell, or of a gazelle, or of a tree is the internal and the external forces coming together to create a self organizing boundary between self and other than self that's not arbitrary. Complicated systems are fundamentally different in that way.
Steven Kotler:Are you arguing from an action standpoint? Would this be, among other things, a really solid argument for biomimicry, for moving technology onto a complex framework?
Daniel S:[crosstalk 00:45:37].
Steven Kotler:Cradle to cradle. If that's where we're going with this, I 1000% agree with you.
Daniel S:[crosstalk 00:45:45] more ecosystem by design than parts by design. This is the key thing-
Steven Kotler:[crosstalk 00:45:50] Biosphere 2, didn't we start here? Like, okay, ecosystem by design sounds awesome, except for the fact that we can't design ecosystems, and I'm a little wary of geo-engineering.
Daniel S:Right, we can't design ecosystems, and here's why geo-engineering sucks. We get scared about temperature. We say we want temperature to go down. This is the one metric we're focused on, so we're going to put a bunch of stuff that reflects sunlight in the atmosphere; it's going to reflect a lot of sun, and temperature goes down. Now, are we addressing the underlying cause of the temperature? No, we know that, so those things continue to progress. And what are the externalities of the shit we put into the atmosphere when it comes down into the water?
Steven Kotler:[crosstalk 00:46:29] I agree, I agree.
Daniel S:The problem is, it's complicated system thinking, which means: what are the finite number of metrics that we use in the model, and how do we do optimization on those finite number of metrics? The definition of complexity is that no matter how you model the system, it has emergent properties that are not found in the model. Which means we actually have to start thinking about something beyond n-dimensional metrics and optimization algorithms across them. There's a new kind of math, a new kind of technology that we have to think about, and that's the reason we've sucked at it.
Steven Kotler:I've heard you say this before and I am agreeing with you. What I want ... Okay, this is no longer Steven talking, this is Steven grilling Daniel. Which is: give me the practical version of this. Theoretically I get it, I understand, and I'm not disagreeing. Other than biomimicry, other than the kinds of solutions we've already talked about, what are you looking at that looks real? How do we make it practical?
Daniel S:First, what it requires to do this is to have our technology implementation process not be run by rivalrous processes. As long as I have companies that are competing to be first to market on AI, synthetic bio, et cetera, we're fucked. Like, we're fucked. Same as long as we have countries competing on arms races, or agents with separate balance sheets competing on tragedy of the commons things. Those are multipolar traps, where every agent doing what's best for them near term is what fucks the whole most long term. If anyone defects and does the fucked up thing, they get advantage, so everyone has to do the fucked up thing. We have to solve multipolar traps, and that does require a new version of economics and governance, like deep shit.
Daniel S:I'm going to put that on the side for now because that's a whole consideration of its own. I'm going to offer one very simple example, just to see that there is actually a way forward: the intention in the process for comprehensive loop closure. Information loops, causal feedback loops, atomic loops, just all the different kinds of loops. If I see that cholesterol is high, rather than just give a statin, I want to know why is cholesterol high. When I look at the loop of how it is self regulating, if I just override the system, there are going to be effects of the statin other than on cholesterol: open loop. There are the effects of why the cholesterol is high that I'm not addressing, that are affecting other things: open loop.
Daniel S:I want to say, what is going on with the regulation? How do I get the regulatory system to work differently? That's a different kind of science, where I'm not just focused on making a synthetic molecule I can patent. I'm focused on understanding how the system regulates. If I'm thinking about geo-engineering, I'm thinking about why is it getting warmer? What are those loops? How do we address them? And in any injunction that I would do, what are the effects of that, and how do I close those loops? If I look at depletion of resources on one side and toxicity, pollution on the other, those are open loops in a network diagram that need to be closed.
Daniel S:If we just start comprehensively ... If I harm someone and I do better in the process, but I engender their enmity, where now they start working on harming me, and they reverse engineer the asymmetric tech that I used to beat them so they can do it next time, I'm opening a game theoretic loop. In nature, there's this deep process of causal loop closure. Our tech doesn't close all the loops, and we aren't thinking about it. All externality is the open loop dynamic.
Daniel S:If we start methodologically saying, what are all the things that are being affected beyond the finite metrics we're trying to optimize for? That's where the externality is going to occur. How do we assess those, and how do we start closing those loops? That becomes core to the design methodology. That's kind of like a biomimicry type thing: this is something nature does, but we're doing it in how we do tech writ large. Not just how do we look at a rattlesnake tooth to make a hypodermic needle, but the deeper process of how do-
Daniel S:How do we do biomimicry of closed loop systems?
Steven Kotler:Yeah. Okay, essentially, forgive me, when I talk about biomimicry, I know everybody thinks about widgets. How can I use spider silk to make a better climbing rope? That's not what I'm talking about. I mean, that is what I'm talking about, but I'm interested in it at the level of systems in society for sure. I remember sitting down with Dickson Despommier, one of the originators of vertical farming. We had like a three hour conversation about how exciting it would be when we get to recycle our poop out of our toilets to grow our food. Those are the loops I'm interested in closing, at at least a city-wide level. I don't know, for a lot of reasons you actually mentioned, how you do this at a state or country level. I can see that at a city level you've got a shot at it. I'm interested in how to scale that up. I think that's a really cool, neat problem.
Daniel S:There's an important reason why you honed in on cities. Which is a city is actually a real thing. It's a self organizing system. A state and a country are made up things. They were won in wars or political battles where the lines are arbitrary. The city is all of the modes of production that lead to healthy-
Steven Kotler:Did you read Geoffrey West's-
Daniel S:[crosstalk 00:52:12] of course. Of course.
Steven Kotler:I love his point that it's nearly impossible to kill a city.
Steven Kotler:You can kill a country, or you can kill a business, but cities are antifragile.
Daniel S:They're actually complex; they evolved on their own, they weren't designed, they're complex.
Steven Kotler:It's super neat, yeah.
Daniel S:The country and the state are complicated. They're supported by increasingly ad hoc [inaudible 00:52:38]. Which is why they don't live that long usually. If we get the loop closure at the level of cities, then we can get the dynamics of interaction between cities to have closed loops.
Steven Kotler:Agreed. Agreed, and I think it's funny because when I started writing about X, you never get to write, at least in my experience. You rarely write the book that you want to write. You write the book that you have to write before you can ... It's all the stuff I have to tell you before I can tell you all the stuff that I'm really geeked about. Still, Peter and I are working on Convergence, the third book. We're still not at the level of this discussion. We're still not talking about cities yet or anything like that. That's really where my interest lies in terms of this stuff and going forward and how do you do this at a city wide level?
Daniel S:Do you have anything about that that's exciting you that you want to riff on further?
Steven Kotler:No, let's go elsewhere.
Daniel S:Okay. Thank you for indulging me on that rabbit hole. By the way, if you actually go to Steven's website he actually has this cool thing called rabbit holes as a top nav tab. You can just find a bunch of fascinating topics and go on deep rabbit hole journeys with him. I like that.
Steven Kotler:Thank you, you're the first person who's noticed it. Nobody else has noticed it.
Daniel S:[crosstalk 00:54:11].
Steven Kotler:I thought it was really cool.
Daniel S:It was the first tab I clicked on, duh. It's well designed.
Daniel S:Website was very well put together by the way.
Steven Kotler:Thank you.
Daniel S:I'm curious and if this is a bad question we can skip it ... You said you don't get to write the books you actually want to. Part of that is because you have to write to what people are currently interested in and it takes a while to build the frameworks to get them interested in other stuff. Part of it's because you have to write to what people currently understand and you've got to build the framework for them to understand the more complicated stuff. Part of it's because what you can sell a publisher on, what they think is going to sell, whatever right? Let's just assume that the audience listening to this podcast is giving infinite benefit of the doubt that whatever is being shared is actually worthwhile. They're interested, they want to take it in. What are some things that you would like more people to think about? To care about, to understand, than they currently are. Yeah.
Steven Kotler:I don't know, that's a hard question. Like, what do I ... The stuff that we've already talked about, the environmental stuff, biodiversity, that's the stuff I want people to understand. At the core of my work, especially in and around flow and human performance, the thing that is so unbelievably shocking over and over is how much more capable we are than we actually realize. If there was any other idea besides the environmental stuff that we talked about, it's that most people have no idea what they're actually capable of, right? There's an interesting sort of hidden tradition in human performance. I always say that Nietzsche was the very first philosopher of human performance. To me it always starts with Nietzsche, because he's the first philosopher post Darwin. He's the first guy who comes in and kind of starts to ... Darwin shows up, and especially after he writes his book on emotion, his second big book, people go, "Oh my God. Consciousness evolved, the brain evolved."
Steven Kotler:Suddenly high performance comes in right there, right? Nietzsche's the first person who picks up on it and kind of takes it from there. Nietzsche's point, and William James made the same exact point over and over and over again, there's this whole lineage of thinkers who are like, hey man, you look at this stuff under a microscope and we are capable of so much more than most people know. Nietzsche thought it was worthless; he was like, "Look, 10% of the population is who you should talk to, because everyone else, they can't hear me." William James said, "No, no, no. You're wrong, you just have to come in through habits." If you really, really want to train everybody up, you have to let people understand how much is unconscious processing and how much is habits. If you can get past that ... and we hear a lot of those same ideas today. I'm right in that lineage. Not that I'm comparing myself to William James or Friedrich Nietzsche, but I'm in that lineage in terms of thinking. That's the only other thing that I would bring up.
Daniel S:Okay, so we can talk about the physical technologies that can enhance human capacity, from biotech and VR and whatever, right? We can also talk about the subtle tech that enhances human performance. The consciousness tech, the habits. For people starting to explore the upper limits of what is possible for them already now, that doesn't depend upon singularity stuff ... We're going to have people here who are deep practitioners and people for whom this is early. For people who are early, how do they think about what they actually are capable of? I guess this is actually a good question. Could everyone with the right training contribute to the world in a way that a Nietzsche or a James or an Einstein or a Da Vinci did? Is that unattainable? What are the upper limits of human potential if people figured out how to work with them?
Steven Kotler:The core question you asked is, does everybody get to be Einstein. That is an interesting one; so far, I haven't seen an answer to it, or I haven't seen an approach to it that I even trust. I don't know is the answer on that one. That said, let's just talk about what we know and what has been proven in and around flow. The flow science dates back to the 1850s. Nietzsche had a word for flow, right? He called it Rausch. It dates all the way back. Mihaly Csikszentmihalyi, the godfather of flow psychology, who was at the University of Chicago running their psychology department forever and is now at Claremont as a director, was kind of one of the big foundational thinkers. From him forward, once he did his work, he kind of said, "Okay, flow is optimal performance. It's ubiquitous; it shows up in anybody, anywhere, provided certain conditions are met." In other words, this is how we evolved to perform at our best.
Steven Kotler:Once that kind of got laid down, the next question people started to answer is: okay, optimal performance, how fucking optimal? What are we talking about? We've got some pretty good, robust numbers in terms of what we know. Let me back up for a second and give you a macroscopic framework for thinking about this before we go into the specific numbers. One of the big lessons of 150 years of psychology and peak performance is that states of consciousness are just as important to performance as skills. In fact, among the quote unquote skills that we value most in the 21st century, creativity tops almost everybody's list, whether it's the most important skill we can teach our children, at the top of the list of 21st century skills from educational foundations, all the way up to the most important quality in a CEO, depending on whose numbers or whose studies you look at. Creativity pretty much tops most lists as the most important thing we need to survive and to thrive in the 21st century. Yet, we suck at training people to be more creative. We're terrible at it.
Steven Kotler:We were involved, we meaning the Flow Genome Project, in the Red Bull Creativity Project, the largest meta analysis of creativity ever. They looked at like 30,000 studies, and they came to two overarching conclusions. One, creativity is the most important skill in the 21st century. Two, we suck at training people to be more creative. The reason is, we keep trying to train up a skill when we need to be training up a state of mind, a state of consciousness. Creativity, cooperation, collaboration, all these great 21st century skills, they're all about states of consciousness, and primarily flow.
Steven Kotler:What we see in flow ... in self help, you know, if I've got a self help program and I can get you a 5% boost in performance and sustain it over three months, that's a big deal. I've got a billion dollar company, or at least a million dollar company. Flow is a step function's worth of change. Productivity, for example: McKinsey spent 10 years looking at how flow impacts productivity. They found on average you get a 500% boost. The Department of Defense, when they were looking at accelerated learning and flow, found learning 470% higher than normal. Studies on the boost in creativity, whether it's work done at Harvard, the University of Sydney, or work we've done at the Flow Genome Project in conjunction with research done at USC, show that creativity will jump 430% in flow. These are huge step functions worth of change.
Steven Kotler:Some of it is just that we've been trying to do this via skills, and that's not it; you need to do it via states. That's why you can get this big a step function's worth of change. You've got to stop and ask yourself ... I hate to say it, but by the way, it's very hard for me to talk about those numbers without explaining the neuroscience of how flow works. I feel very exposed, and I feel like I'm selling magical fairy dust, right? Let's just say that I feel bad because I didn't break down the neuroscience. My point here is, one, if we're dealing in an exponential world where things are speeding up and speeding up and speeding up, you absolutely need states. You need this much acceleration in human capacity to keep up. There's no other way you could do it, because everything's moving too quickly, first and foremost.
Steven Kotler:Second of all, what kind of challenges do you go after if you know you can be 500% more productive, 430% more creative, learning can go up, and blah, blah, blah? The point is that this is hardwired biology, built into all of us. We know how it works, we know how to get more of it, we know how to work with it. There are huge gaps in the research you can drive buses through, but we've got a handle on this stuff. I don't know if you can be Einstein, but you can be a hell of a lot more creative and motivated, and accelerate your learning rates, and I can keep going on and on. The other thing that flow does is massively amplify meaning. If you want to go beyond kind of core skills into meaning and wellbeing, these things are off the charts in high flow lifestyles. To me, I look at those numbers and I'm like, I don't know if you can be Einstein, but you can sure be a hell of a lot more than you think you can.
Daniel S:Just recapping, the key things are that our skills are only fully possible to bring to bear when we're in the right state, and some things that we think are skills are actually not skills at all, they are the result of states. Because of the increasing complexity of the issues in the world and the increasing rate of change, being able to have increased information processing is not just valuable, but necessary. Those are a couple key things on-
Steven Kotler:[crosstalk 01:04:51] yeah.
Daniel S:Needed in the gap.
Steven Kotler:Just to break it down, because you brought in information processing, and that's a really good way to frame this. I always say that this is primarily the result of, A, transferring processing from the explicit system to the implicit system, and B, a couple of neurochemicals. Flow surrounds the information processing system in the brain. We take in more information per second, so data acquisition goes up. We pay more attention to that information, so salience goes up. We find faster connections between that incoming information and older ideas, so pattern recognition increases. We find faster and farther connections between incoming information and other ideas, so lateral thinking, or outside the box thinking, increases. On the back end, looking at the creative cycle, it's not just enough to use all this information processing and come up with neat ideas. You still have to put it out in the world; that takes risk taking, and, bonus, risk taking goes up in flow. Literally every step of the creative chain, the information processing chain, is surrounded and amplified by flow. That's what the state was designed to do. It was designed to solve these kinds of challenges, as you pointed out.
Daniel S:You mention the neuroscience; The Rise of Superman and Stealing Fire both get into chunks of that. Then there's more stuff that's referenced on Flow Genome's website. Those are things I'd love for people to check out. I think it's critical and fascinating work. I have a couple questions. In the neuroscience, one of the concepts you discuss often is hypofrontality. Yet, obviously, the prefrontal cortex is a huge part of what makes sapiens uniquely sapiens. It would be easy to say hypofrontality is good, let's just do prefrontal lobotomies on everyone.
Steven Kotler:It's a bad idea.
Daniel S:I actually long ago wanted to write a book called The User's Manual for Your Prefrontal Cortex, which was about how to actually have an empowering, healthy relationship with this thing that is very powerful and that we use wrongly most of the time. Can you share a little bit on when we want to be hypofrontal and what is a good use of prefrontal capacities? The thing we call flow, which is very embodied and kinesthetic, is not the only state humans ever want to be in.
Steven Kotler:Yeah, first of all, people come up to me all the time and they want to live in flow. Or I get the other one, people come up to me all the time and they're like, "Oh my God, Steven, I'm always in flow. You should study me." I always tell people, "Look man, we've got a word for that. We call that schizophrenia." Mania, like, there's words for that. You don't want that. The truth of the matter, and Jamie, my partner who you know very well, is very eloquent about this: the information is in the contrast, right? Flow's a peak state, it's phenomenal for certain things. But if you only lived in a peak state all the time, it wouldn't be a peak state anymore. It would just be the nature of reality, right? That's A. And B, when your prefrontal cortex is deactivated, what do you lose? Well, you lose your sense of morality for one. Flow over time will make you more ethical, but morality is a prefrontal capacity, which is why soldiers get into flow all the time, criminals get into flow all the time, right? It's a neutral state. So you want that sense of morality, I think. And also long term planning, that's shut down, right? The reason we fall out of time is because time is calculated all over the prefrontal cortex. Same thing with long term planning, right? When those things go away ... and there's an ongoing debate, and the stuff that I'm really geeked about are questions like, does it all turn off in flow? The goal directed system that allows you to plan really long term?
Steven Kotler:Is it totally deactivated? I don't believe it's totally deactivated. And one of the things I've been doing lately is re-going through every single article that's ever been published on the neurobiology of flow, trying to figure out which of the goal directed systems are turned on and off, for this reason: I don't think all of it goes away. But complex, logical, rational decision making gets turned off in flow. You want to write in flow, you want to edit out of flow, is how I describe it for myself, right? It's a phenomenal tool but it has its place.
Steven Kotler:You can have way more of it, you know what I mean? For a lot of people, flow is a really rare, once a month kind of experience. If you know what you're doing, you can have two or three small flow state experiences during the day, and that's what you want when you're focused on the core of what you're doing, for example when I'm writing, or if I go skiing for the day. Then you want it. But it's not useful if ... if I'm sitting with my wife, and we're going to talk about how we're going to plan out our budget for the next five years. No, I don't want to be in flow for budget planning. That's a bad idea.
Daniel S:But you do for lovemaking.
Steven Kotler:You do.
Daniel S:Yes. So obviously, we want to be training all of the states that matter and our ability to move through them, right? Training all those skills and all those states. And here's the way I think about it. I'm curious to hear your thoughts on it, because I'm sure you've explored this a lot. The word flow, the analogy of it as a psychological state coming from fluid dynamics, is very interesting. When we think about laminar, smooth flow of a fluid versus a turbulent flow, we're largely looking at what we could talk about as a friction dynamic, and flow is a state of a lot less friction.
Daniel S:Now, the way that we usually talk about flow is this very specific, hypofrontal, time dilated state. But I started thinking more broadly about just the ability to decrease friction in our psychological, cognitive, physical, mental, and emotional capacities and states, writ large. Getting rid of internal conflict, right? Getting rid of the places where old patterns that are just not useful, all kinds of critic and doubt functions, come up. That increases the sense of something like flow even outside of what we traditionally call flow. Because, as you're mentioning editing, you've probably had times where editing is just terrible and other times where it's awesome, and the awesomeness has a kind of flow-like quality. You're on, you're crisp, you're clear, and yet you're still very prefrontal.
Steven Kotler:So, one of the things ... and this is one, your language centers. Even though large swaths of the prefrontal cortex get shut down, parts of the language centers seem to stay really active. So writing and editing in flow actually tend to work. You can do it. It was a bad example. I mean, it's true, but you get some language function in flow. But yeah, there's a bunch of different ways to answer this. A lot of what you're talking about is decreasing friction, right? So, what we've learned about flow over the past five, six years, and we've been very involved in this work at the Flow Genome Project, is that these flow states have triggers. Preconditions lead to more flow.
Steven Kotler:And you look under the hood of the triggers, right? Flow follows focus. It only shows up when all our attention is right here, right now. So that's what the triggers do. They drive attention into the present moment. Now, they do this in a number of different ways. A lot of them produce norepinephrine and dopamine, which are focusing drugs, right? They're pleasure chemicals, they do a shit ton of other stuff, but they're focusing drugs.
Steven Kotler:The other thing that some of these triggers will do is lower cognitive load, lower anxiety, right? That's what you're talking about if you're removing friction, right? That's a really big deal. And I think, by the way, it's just a big deal in general. Highly effective people, the people that I've spent my life studying, are really great at lowering cognitive load. They're really, really good at that stuff.
Steven Kotler:And the other thing that I was thinking about when you were talking is, when you're in flow, the baseline brainwave activity is at the borderline between alpha and theta, right? Beta is where we are right now. It's a fast moving wave. It's where you are when you're awake and alert, and whatever. Below that is alpha, a slightly slower wave. It's daydreaming mode, where you're going from thought to thought without a whole lot of resistance. Below that is theta, super slow. It's where you are in REM sleep, right? Flow's on the borderline.
Steven Kotler:Now that border isn't fixed. Your brainwaves don't just hover at that alpha/theta borderline. Flow is called flow, right? Because every action, every decision seems to flow seamlessly, perfectly, effortlessly from the last, because it's this kind of accelerated, high speed decision making. Whenever you make a decision, you go through what's known as a decision cycle, and at one point in that decision cycle, you automatically jump up to beta. What we see in people who are really good at flow is exactly what you were describing. So frictionless. They will jump up to beta, which is conscious processing, right? Your brain will go, "Are you sure?"
Steven Kotler:And people who are good at this, will drop right back down to that Alpha, Theta, borderline, right? And people who are not good at this will get hung up there, right? They'll get scared. Something will catch their attention and suddenly they're out of flow.
Steven Kotler:So neurobiologically, you do see something that is more frictionless behavior. And I think it's fundamental to performance, but it's not talked about very much. Nobody goes out and says, "Okay, one of the keys to your performance, what you have to know if you want to be a peak performer: how to lower cognitive load," right? It's not sexy. But actually that's a huge, huge chunk of it. Really important.
Daniel S:All right. So many directions we want to go and I know we have limited time. So I'm going to try and go to ones that we haven't touched yet. You talked about high performing individuals and that there's kind of a reduction of friction between the different states-
Daniel S:So I know you're also very interested in high performing groups. So we're not only looking at flow at the level of one individual, but groups related to each other. Can you tell us a little bit about that?
Steven Kotler:Yeah, I mean, as I said there are 20 flow triggers, right? 10 or 11 of them are individual triggers, and the rest are group flow triggers, identified mostly by Keith Sawyer. He's at the University of North Carolina. So it's a separate mechanism, right? There is a group, shared, collective version of flow, right? It's called group flow, and that's with a small group of people. When you're dealing with, like, a stadium, you call it communitas, right? It just depends on how many people are involved. And it's probably one of the ... so here's what's ... this is not what you asked, but this is the most interesting thing to me.
Steven Kotler:For one, group flow is the experience people will rate as the most pleasurable experience on Earth, right? It's the one people will take over pretty much anything, and [inaudible 01:17:10] group flow over flow. And there's good data on this. Two, group flow is a group at its collaborative, creative best. So it is our best state for collaborative innovation, and as far as I'm concerned, we're going to need to get really, really good at it to do all the things we talked about in the first half of this podcast. And what I find interesting is, there's a whole side of this discussion that is actually about consciousness and what is the nature of reality and all of that. I take an engineer's approach to a lot of this stuff. I want to know how it works and how do I tinker with it.
Steven Kotler:And if you're an engineer, one of the first things you want to know is, what are the boundary conditions, right? So group flow, as the most pleasurable experience on Earth, the thing that we like the most, is a boundary experience. It's a boundary condition for consciousness and for human experience. This is the thing we like the most. So it gives us a toehold into this really murky quagmire that I find interesting. I also think group states are probably the least understood human phenomenon that I know of. And you see it because when you start studying it, you have to work your way all the way back to really simple complexity behaviors. Birds flocking, right? Those kinds of questions get raised in group flow discussions as well.
Steven Kotler:And again, I understand that flocking is three simple mathematical rules and everybody knows how it works, sorta. But there's still that kind of quality to the science. And it doesn't matter if you're looking at real birds or a digital simulation of the same phenomenon, it still has that this-shouldn't-be-working-the-way-it's-working quality, and group flow has that characteristic. So I think group flow is interesting because it gets at a certain aspect of complexity that is still really interesting and mysterious to me. It's interesting from a consciousness perspective. And obviously from an innovation, we-need-this-to-save-the-planet perspective. So, there's three different ways you can go there.
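[Editor's note: the "three simple mathematical rules" Kotler alludes to are presumably Craig Reynolds' classic boids rules: separation, alignment, and cohesion. A minimal sketch of one simulation step is below; the radii and weights are arbitrary illustrative choices, not values from the episode or from Reynolds' work.]

```python
import numpy as np

def boids_step(pos, vel, sep_r=1.0, align_r=3.0, dt=0.1):
    """One update of Reynolds' three flocking rules for n boids in 2-D.

    pos, vel: arrays of shape (n, 2). Returns the updated (pos, vel).
    """
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = pos - pos[i]                       # displacement to every other boid
        dist = np.linalg.norm(d, axis=1)
        near = (dist < align_r) & (dist > 0)   # local neighborhood (excludes self)
        if not near.any():
            continue
        cohesion = d[near].mean(axis=0)              # steer toward neighbors' center
        alignment = vel[near].mean(axis=0) - vel[i]  # match neighbors' average heading
        too_close = (dist < sep_r) & (dist > 0)
        separation = -d[too_close].sum(axis=0) if too_close.any() else 0.0
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    return pos + new_vel * dt, new_vel
```

Run over a few hundred steps from random initial conditions, these three local rules produce the coordinated global motion he's describing, with no leader and no global plan.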
Daniel S:So, you mentioned that creativity is the most important thing and we suck at it. That was the result of the big meta analysis: we suck at teaching it. And group flow is the thing that we know the least about that is probably the most important to solving the grand challenges. So, what do you think about a future of education that gets good at the critical things that we suck at now?
Steven Kotler:Well, I've said for a while that I'm really excited about VR, and I'm really excited about the intersection of VR, AI and flow science. Video games are really good at hitting the flow triggers. We know this, video games are addictive. They produce flow, and that's what makes them so addictive. VR is much better at it. So it's really good. And we know from the science, as I said earlier, flow [inaudible 01:20:37] learning, right? Quick shorthand for learning and memory in the brain: the more neurochemicals that show up during an experience, the better the chance it's going to move from short term holding into long term storage.
Steven Kotler:Flow is this huge neurochemical dump, so things tend to get cemented. So VR is very good at doing that. VR is also a distributed education platform. All you need is goggles. You don't need schools, right? You can have one teacher at Stanford [inaudible 01:21:01] teach the world. Great, fantastic. And with AI, it can be individually customized to individual learning styles. That's fantastic. So we've got an individually customized, completely distributed, accelerated learning technology coming online right now. Fantastic. I love that. That gives me a lot of hope.
Daniel S:Okay, I have a question for you that pertains to VR and it pertains to flow, actually. You're mentioning that video games trigger flow and that they're addictive, and that's the interesting thing. And people who want to be in flow all the time, or who are, that's mania. So, the topic of hypernormal stimuli, right? If I have a lot more dopamine and norepinephrine coming in from something for a long enough period of time, I get decreased sensitivity on the dopamine and norepinephrine receptors. And so then, when I come back down to a normal level of stimulus, it actually feels subnormal, and now I'm having a subnormal response to what was previously normal, i.e., an addiction cycle has just been created.
Daniel S:Where I don't actually experience joy from normal things anymore and I need bigger and bigger hits of narrower and narrower types of stimuli. How do you see being able to have intense experiences create people that are the opposite of addicted? That are more resilient, and that actually become more sensitized to normal experiences rather than desensitized by the hypernormal?
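[Editor's note: the desensitization loop Daniel describes can be sketched as a toy model. This is a cartoon of receptor down-regulation, not a neuroscience model; the function name, the "normal = 1.0" scale, and all constants are made up for illustration.]

```python
def felt_reward(schedule, sens=1.0, adapt=0.05, recover=0.01):
    """Toy hypernormal-stimulus loop.

    Perceived reward = raw stimulus * receptor sensitivity. Stimuli above the
    'normal' level of 1.0 down-regulate sensitivity; normal-or-below
    stimulation lets sensitivity slowly recover toward baseline.
    """
    out = []
    for stim in schedule:
        out.append(stim * sens)           # what the stimulus actually feels like
        if stim > 1.0:
            # strong stimulation desensitizes, with a floor at 0.1
            sens = max(0.1, sens - adapt * (stim - 1.0))
        else:
            # abstaining from the hypernormal slowly re-sensitizes, capped at 1.0
            sens = min(1.0, sens + recover)
    return out

# A binge of 5x-normal stimulus, then a return to normal life:
felt = felt_reward([5.0] * 50 + [1.0] * 200)
```

In this toy, the first "normal" experience after the binge is felt at a small fraction of its face value, which is the addiction cycle in the question, and a long stretch at normal levels slowly restores baseline sensitivity.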
Steven Kotler:So I think there's three things going on in what you just said. One, [inaudible 01:22:39] talked about flow. He said, look, those five neurochemicals I mentioned are the five most potent drugs on earth, right? And flow's the only time we get all five at once. Just for comparison, right? Two of the chemicals that show up in flow, norepinephrine and dopamine, which you just mentioned, that cocktail is romantic love, right? That's two of these pleasure chemicals; flow gives you five. So in terms of romantic love, we love that feeling, we're into it, right? Flow is bigger. But he says it's not a negative addiction that leads backwards, like drugs or gambling. It's an addiction that leads forward, because flow is always demanding that you push your skills to the edge, to the utmost. His argument is that it's an addiction that leads forward.
Steven Kotler:You've raised a question that nobody's looked at. Now we have teamed up with [inaudible 01:23:27] lab to actually start this work, where we're going to go into the addiction literature and see what we can learn about flow. But nobody's done the scans to look at the dopamine receptors. If you have a very high flow lifestyle, are your dopamine receptors actually getting less active over time? Open question. Nobody's looked, that I've seen. I've never seen the work. So, does the addiction to flow actually create the same sensitivity problems in the brain? Open question. That's not the answer to your question, though. The answer to your question is a different thing.
Steven Kotler:And this is an opinion, I've got zero data to back it up, but I have noticed it over time and it's worth talking about, which is that people who are really good at flow and really good at using these states do not ride the highs very high. They're [inaudible 01:24:26] myself as well. I'm less interested in the pleasure chemicals than I am in what they do to my thinking and my performance, right? The actual high portion of it, I've discovered, is, I don't want to say dangerous, but if I ride that dopamine high all the way out, I'm using up all the dopamine in my system and it's going to be harder to get it next time. So there is ... flow itself is an affectively neutral experience, right? And the reason is, you need emotions to modify behavior, and in flow everything is working perfectly, so you don't need to modify your behavior. So it's essentially affectively neutral. The back end is where the pleasure stuff really starts to come in and you start to notice it. When you're in flow, you don't actually have emotions that you're really noticing. It's as it starts to taper out that you're getting that big pleasure. I have found that people who are really good at this almost ignore the pleasure component, in the same way that you learn to ignore the [inaudible 01:25:29]. You don't want to ride too high or too low emotionally over time, because it's just going to get in the way of sustained long term peak performance, is what I found.
Steven Kotler:This is not to say that you don't experience joy or [inaudible 01:25:44], that's not what I'm saying. But there's some part of the experience that's sticky, where you can feel the drug aspect of the flow experience. And I have found that not attaching to it is useful over time, meaning it will produce more flow over time, right? Same thing with riding it out.
Steven Kotler:I always tell people, in writing ... I run a class called Flow for Writers, and one of the things we talk about there is an idea that ... I heard it from Gabriel Garcia Marquez, I use it, Hemingway talked about it. Quit writing when you're most excited, and that's essentially quitting in flow, right? The neuroscience of it is that excitement is dopamine and norepinephrine. These are limited neurochemicals; at peak saturation, they're going to last 20 minutes in the brain. So by the time you notice them, you're coming to the tail end. If you try to ride that high out, and the increased pattern recognition that you also get from the dopamine, it seems to deplete all of it, and it makes getting back into flow the next day or the day after that a little harder.
Steven Kotler:So I've found that to sustain this over time, you don't really want the highs. You want to ignore the emotional content. You want it for what it does for your performance. And certainly it's better than the other options. But I try to stay a little unattached to the emotions of it. Does that make any sense whatsoever?
Daniel S:Well, yeah. The first thing I want to say is that you, being one of the cutting edge researchers on flow, say "we don't know, this is at the edge of our knowledge" a lot, and I appreciate it.
Daniel S:But you just said something very interesting here. We usually talk about flow as full immersion in the moment, and then you just started saying the people who do best at it notice what is actually going to be good over time and what is going to be meaningful over time. There is actually a balance in that dialectic, right? They're not getting into "the moment is all there is," which would be problematic, right? Again, you could just lobotomize the prefrontal cortex; we don't want to. So what is the relationship between our depth of experience in the moment and our awareness of our potential for depth of experience in all future moments? And how do we-
Steven Kotler:This is what I said to you earlier, that the cutting edge work on flow is around the goal directed system, right? Like, in this state, are you really shutting off your ability to think long term? And here's where this gets really weird. So, where I'm going three books from now, what I've been researching for years, is intuition. And one of the things you have to answer about flow is that there's a couple of different intuition systems in the brain. We're really familiar with the Blink, Malcolm Gladwell, rapid cognition one, right? Like, I look at a statue and I know it's a fake.
Steven Kotler:And we pretty much understand that. [inaudible 01:28:48] like it's pretty clear. [inaudible 01:28:52] established that if you're an expert in a field, trust your rapid cognition, and if you're not an expert in the field, don't trust your rapid cognition. That's pretty much the moral of the story. That's what we know.
Steven Kotler:But in flow, you will get really long term intuitions. Like, the title of a book will pop into my head and just a sense of it, right? And that'll come up in flow, that happens to everybody in flow. We call it inspiration, right? That's long term goal setting, right? That's the very thing that's supposed to be shut off in flow, right? And yet it's really common. Everybody has this experience: oh my god, I had an intuitive idea in flow about this thing I should do and I'm going to devote the next 10 years of my life to it. Happens all the time. Burning Man just ended. It just happened. 50,000 people at Burning Man, and they've all come home with their hair on fire about some weird ass thing they thought about [inaudible 01:29:43], right?
Steven Kotler:This is long term intuition. So there's something funky in the neuroscience of this, in terms of how flow works, how the prefrontal cortex handles goals over time. And I think another real mystery in brain science right now is the goal directed system over long timescales. It's an open question, right? And it's weird because we know Kelly McGonigal's research found that most of us have a very hard time seeing 10 years into the future. And, I don't know if you know this or not, but when you imagine yourself 10 years in the future, 20 years in the future, you're imagining a stranger. The medial prefrontal cortex, the part of you that imagines you, turns off. It's why people can't stay on diets or whatever, because the person who's going to benefit from the diet is not even you, according to the brain.
Steven Kotler:So these systems are really weird and not well understood. Flow is tied in with them. Intuition is tied in with them. Critical for long term peak performance and the survival of our species, and a giant mystery scientifically.
Daniel S:Okay. So maybe one last topic I want to get into and then I know we're running out of time here.
Steven Kotler:Yeah, we already-
Daniel S:So, it seems like an awesome thing to be in a VR environment where I can customize the environment to optimize my learning and it customizes the learning to me. That seems awesome, right? Now say I start spending a bunch of time in an environment that is perfectly tailored to me. Then I come out into a world that is not tailoring itself to me. I'm like, fuck this, I want to go back into the world where I'm the center of the universe and the AI is making the universe for me. It's kind of like how it seemed pretty optimal to make comfier and comfier couches and more and more stimulating entertainment.
Steven Kotler:Like I've said for a while, I don't know if Ray Kurzweil's singularity ever happens. I don't know about that singularity. But the one I am paying attention to is, there's going to come a point in time very soon when we can make experiences in VR that are as pleasurable, as meaningful, as real world experiences. And once we cross that threshold, right? And we're really close to that threshold right now. The meaningful part is an open question. Pleasure, we're already there. People like VR more than regular reality. That's the singularity I'm paying attention to.
Steven Kotler:Because we all saw eXistenZ. If you haven't seen the movie eXistenZ, you should go out and see it immediately. But that's a really interesting point in time, right? That is the question of, sometime this century do we check into the matrix and never come back, because it's more fun in there? I have no idea. And the other question is, is that a bad thing? Maybe we use less resources. Maybe it's easier to build circular economies if we just check into the matrix. Or, by the way, if [inaudible 01:32:44] is right, we've already done that, and this is just our simulation of the simulation of the simulation.
Daniel S:So, beyond the scope of this conversation, but at some point, I have a strong refutation to the strong simulation argument, which is-
Steven Kotler:Oh, I'd love to hear it. I'd love to hear.
Daniel S:Another part is discussing the ethics and existentialism of it. Is it a bad thing? Is it a good thing? What is the basis of the words bad and good in a scientific framework where we only say what is, not what ought? And what is the problem of being able to engineer the most fundamental levels of reality without any actual north, without any compass?
Steven Kotler:By the way, Sam Harris would disagree with you. He would say that ought is built in, but that's beside the point. I get what you're saying.
Daniel S:Yeah, The Moral Landscape says give me the connection between first person and third person and I will use the third person to talk about my first person. And that's a very long argument also. But my general take is that higher order systems that emerge out of lower order systems, and depend upon them, need to enhance, not debase, the lower order systems they depend on. And so human technology actually has to serve ecology and make a richer, more robust ecology, as we talked about earlier. And our relationship to the virtual world should make us navigate the physical world better, just as our social tech should make our real world relationships better.
Daniel S:If they don't, if the social tech that depends upon real human relationships to build the machinery and the money to have it and whatever, if it debases those relationships, it's in its own suicide process. And so basically, the overarching principle is that higher order emergent stuff should be in a feedback loop enriching that which it depends upon. And for all of tech, this is coming back to your two systems, right? Evolution and whole systems together, and then the interaction of them with tech.
Daniel S:All the tech that we're making enriching the way that we actually navigate foundational reality is, I would say, a key overarching principle for that question we were discussing earlier.
Steven Kotler:I think you're right. I like the idea. By the way, I have no idea how you would convince people to design virtual reality to enhance reality, right? I love what you're saying, it's really smart, and I agree with you. It's not that I don't think we can do it, it's that I think we're going to make every mistake in the world first and then we're going to do that. I wish we didn't have to ... you know what I mean? We're going to make virtual reality systems that alienate us from one another as much as they bring us together, right? Until we figure it out.
Daniel S:Here's one of the underlying dynamics of it. If I run a business, I want to maximize lifetime revenue from a customer. So that means if I have a source of supply, I want to manufacture increased demand. So addiction is as profitable as can be, right? We see this in drugs and food services and whatever, right? And so, if I'm supplying something to you like Facebook, or the VR version of customized Facebook, or porn, or whatever it is, as long as I'm in a rivalrous relationship with you, I'm a company and you're a purchaser, the more time you spend on Facebook, statistically, the more likely you are to kill yourself, or whatever other studies come out of Stanford.
Daniel S:So if I was doing what was best for you, I'd give you the information and get you the fuck off. But what's best for me and my marketing is to have you spend more time on there, and I've got the AIs that beat Kasparov at chess optimizing for stickiness, for your mind's attention. We have asymmetric information warfare going on, where you being addicted is bad for you and good for me in a supply and demand dynamic. So as long as that type of capitalist structure is driving tech that has that much asymmetric advantage, it will be used for all the worst purposes possible, because they're actually best for the supply oriented economics of it. Which is why I say we have deeper underlying questions: what is the ecosystem, what is the incentive of agents?
Daniel S:If the incentive of agents is to do stuff that fucks the other agents, then with exponential tech, exponential fucking each other blows the whole system up. And so it's a framework question.
Steven Kotler:I just like the phrase exponential fucking. I don't know about [inaudible 01:37:34] else. But I like the phrase exponential fucking. I can work with that.
Daniel S:Group flow states of a new kind.
Steven Kotler:You're not wrong. The problem is, if we make that argument out loud, you sound like a socialist or a communist.
Daniel S:Except that then we can make the argument for why those systems fail too. Why we have to come up with totally new shit.
Steven Kotler:That's my point, right? To have this discussion you have to talk about socialism, you have to talk about communism, you have to talk about universal basic income. You have to [inaudible 01:38:16], right? You have to talk about, can you create networks without network effects and platforms, right? Those are the questions you have to start answering to go into that world, and I don't know what those answers are.
Daniel S:Sometime we should get the right people together in the right timeframe to actually have those dialogues. That would be fun.
Steven Kotler:It would be really fun. That would be really fun.
Daniel S:Steven, it was a blast having you. Thank you so much for making time. I know you are busy training for ski season and writing books.
Steven Kotler:Hey, this was fun hanging out. So it's always good to see you.
Daniel S:Likewise. Hey, listeners. If there are any questions you have for Steven, mostly don't ask him. Just go to the website, read his books. He's probably answered them already. But if there are any that are really interesting that haven't been addressed in those places, send them to us at [email protected], and if there's some really interesting stuff, maybe we'll get him to come and do another follow up deep dive.
Steven Kotler:Happy to do that. You can also find me on Twitter. I actually do answer weird ass questions on Twitter.
Daniel S:Good to know.
Steven Kotler:I've found the platform itself, like all social media, is really annoying, but it's really good for philosophical discussions about consciousness. Arguing about consciousness when you're limited to, like, 200 words is useful, considering the rabbit holes you can go down. It's a good limit.
Daniel S:I'm going to look at your Twitter feed and curious about this now.
Steven Kotler:Yeah, it's a little weird.
Daniel S:All right. Thank you my friend.
Steven Kotler:Take care of yourself. Be good.
Daniel S:Take care.