Why Collective Intelligence is Vital to Navigate Existential Risk Today

What follows is a transcript for the podcast HomeGrown Humans - Daniel Schmachtenberger - Existential Risk.

Topics within the interview include:

  • The most important problems in the world today
  • How technology and social media are undermining sense-making
  • Why we need to reinvent democracy
  • The world's energy catastrophe
  • Where have all the super geniuses gone?
  • How to maintain existential hope in an environment of crisis

Jamie Wheal: Well, welcome to the next edition of HomeGrown Humans with Neurohacker Collective. And this is a very full circle kind of conversation. I get to speak with a dear friend and colleague and fellow instigator of all things future-facing, Daniel Schmachtenberger. One of the founders of The Consilience Project, an organization dedicated to improving the world's information ecology and bettering our collective decision making so that we can forge futures that we all want to live into, as well as part of the founding team of Neurohacker Collective itself, and a generally all-around thoughtful and good guy. So, Daniel Schmachtenberger, welcome to HomeGrown Humans.

Daniel Schmachtenberger: It's good to see you and good to be talking with you, Jamie. It is a fun, full circle thing, given that the first instantiation of this podcast ever, I was hosting five years ago or something like that. And then Heather did, and then we started to grow it, and yeah, it's cool. It's cool to be here talking with you.

The Most Important Problems in the World Today

Jamie Wheal: Absolutely. Well, listen, to me there's an infinite number of directions that we can go. But the one that came leaping off my pen first when I busted out my notebook was just curiosity on your sense of how the kind of, quote unquote, "conversation" has shifted. And by that, I would say we can refine it to all things tending towards existential risk, future studies, positive interventions at any level of the human cultural stack that we're playing with. What's your sense? We're not really out of the whole COVID woods, but everyone's just playing through at this point, so it feels like we are somewhat on the other side of two and a half years of quarantine, and I'm just curious, if you've got an answer, or we can just explore this out loud, what feels like it has shifted or changed.

What have you noticed from, say, where we were in January, whatever it was, was it 2020 that we went into lockdown? Is that right? We went in February or March, right? So, going into that whole chapter of existence, what's changed or shifted? Because obviously there's been so much, and I think we've probably already started forgetting the layers as they've come in, but what are you tracking that is different now in the summer of '22 than when we were all caught unawares with a two week lockdown in late February of 2020, within this conversation and our friends, our colleagues, and just what you're sensing in the zeitgeist of the space?

Daniel Schmachtenberger: It's an interesting question, because of the initial frame of what things are in "the conversation." It actually seems like a pretty small number of people think about existential risk, all things positive, and the state of the world as a whole as a single conversation. I would say the biggest change that I've seen, in terms of people's sense of maybe what that conversation has in common, is among people who are thinking about humanity and the world globally: what are the major issues that we face, not in a specific sector or by a specific nation or to a specific audience of people, but what are the things humanity faces globally?

And what are the ways of addressing those that might be adequate or meaningful, and how do those global issues contextualize other things that might be meaningful? I would say COVID, but even before COVID, the Trump election. I noticed a big shift in my conversation publicly, but also in my conversations with a lot of institutional heads, whether academic, government, big business, whatever, from the sense that "there are problems, but there've always been problems, this is kind of business as usual," to the sense that, oh no, we're actually entering a different phase, where Pax Americana and the post-World War II world seems to be shifting radically.

It seems like the degree of partisan venom and polarization associated with the Trump election moved a lot of people who before that had still been arguing "there are problems, but the country works fundamentally fine, it works better than any other country," to, "Oh, maybe there's actually an existential risk to democracy itself." Maybe social media technology really did break the fourth estate in a way that breaks democracy but is fine for autocracy. I started noticing much more serious consideration regarding the fundamental viability of these social systems that we've depended upon, liberal democracy and things like that, and then so much of Australia catching on fire.

And then everybody forgetting that because of COVID, when that was the biggest thing ever. And then the secondary shocks of COVID, from the supply chain effects to the economic effects. And then I think definitely January 6th and all of the George Floyd-related protests. That suite of things seems to have made the view held by the small group of people who saw that these current world systems were untenable and unwinding pretty popular, if not believed, at least something that a lot of people were sensitive to.

And particularly the people who have been in the running-the-institutions position, that is something I find very interesting. So, what I've found is that I've had way more people asking about things like how do we fundamentally redesign our political economy? How do we build a world where democracy is viable in a post-social media and pretty soon post-deepfake world? And asking, even getting a little bit more future-oriented, what does the future of economy and education and everything in a post-technological-automation world look like? Those are questions now that there are very few people in institutional positions not interested in, and there were very few very interested in them before.

Jamie Wheal: Yeah. That reminds me of that University of Chicago philosopher, Jonathan Lear, who wrote that book Radical Hope. And I referenced him at the end of Recapture the Rapture, but he said the inability of a civilization to conceive of its own demise is one of the blind spots of any culture, and it does feel, if I'm tracking what you're saying, you're suggesting that 2016 through now, including both the political disruptions, Brexit, Trump, January 6th, but also the psychosocial, governmental, policy, everything of quarantine, has just massively stretched the Overton window. Yeah. As to what, quote unquote, mainstream or polite opinion is willing to entertain at this point.

What Does an Economy That Does Not Externalize Costs Look Like?

Daniel Schmachtenberger: Yeah, totally. And I know our friend Nate Hagens was just out with you. And there are some things that are still really far outside the Overton window of thinking about adequate solutions, too. And so, I'll just take his as an example, because maybe people here are familiar. One of the things that he looks at very deeply is that the GDP of the world, the total size of the global economy, is pegged very tightly to the total amount of energy used, which makes sense, right? Any good or service is going to require some energy. So it's kind of tightly pegged. We say, "Well, what about efficiency? Shouldn't we get more dollars per joule of energy with efficiency?" Yes, but the rate of change in efficiency is smaller than the rate of increased energy demand coming from the increased demand in finance, and those efficiencies lead to using more energy.

Jamie Wheal: Is that Jevons paradox?

Daniel Schmachtenberger: Jevons paradox: anytime you get an increase in energy efficiency, you use more energy, because when energy becomes a little bit cheaper, new markets open up. And the energy return on energy investment needed to get us off oil means you've got to use a lot of oil and coal to make those solar panels. It's going to take them 10 years or however long to make enough energy to pay off the energy it took. It just doesn't converge. And so then we say, "Okay, well, the key insight is that somewhere around 2019, we started getting very clear diminishing returns on hydrocarbon investment, meaning it takes more barrels of oil to get a new barrel of oil." It's not-

Jamie Wheal: Specifically like tar sands and fracking and all the diminishing returns on those original bonanzas?

Daniel Schmachtenberger: Exactly. And so, if we're in a place where we can get more oil, but we're getting lower energy return on energy investment, which is going to continue to happen, while needing exponentially more money for the financial system to keep up with its own interest, and the money and the oil are correlated, that thing doesn't get to keep going. And so then there's a reckoning, where the price of oil has to include all the externality costs on the environment. We're not making more hydrocarbons of that type, and we're not dealing with all the waste, of which climate change is one of the waste streams but not the only one, obviously oil spills and mountaintop removal mining for coal and everything. So, if you were to say, "What would it cost to pay for oil with real costs, right? Where we could sustainably keep doing this forever, where we're not actually net harming the environment?" Depending upon which way you analyze it, it's $10,000-plus per barrel of oil, and that makes it-

Jamie Wheal: So wait, that's an interesting data point. You're saying if you actually priced in the cost of acquisition, restoration, refinement and distribution, and then all the carbon offsets that would be required to do this in a sustaining way, then you're at 10K a barrel.

Daniel Schmachtenberger: 10K to a million, depending upon how you factor all of the life cycle analysis.

Jamie Wheal: Okay.

Daniel Schmachtenberger: And so what that means is that the cost of my shoes and the cost of the computer we're talking on and everything just exploded; the entire market of everything broke.

Jamie Wheal: Yeah. So, you basically buy a pair of shoes and you get them resoled indefinitely, a pair of really good shoes? It changes purchasing decisions.

Daniel Schmachtenberger: And the computer we're talking on, rather than being a two or three thousand dollar Apple, might be like a million-dollar Apple, if you factor the real cost of what it takes for the supply chains to make this thing. And then you're like, "Okay, we would actually not even do manufacturing the same way." We only do manufacturing this way because we've been able to externalize all the costs to the environment. Now we're hitting planetary boundaries; we're not going to be able to keep doing that. And if we were going to internalize the costs, well, we didn't externalize like half the costs. We externalized like four orders of magnitude more of the costs than we internalized.

And so nobody is thinking seriously enough about what the fuck does an economy that does not externalize costs look like, and how do we retool our total global infrastructure in that way? And how do we do it in the time we have, factoring that we're already in diminishing returns on hydrocarbons, and everything is oriented towards growth, and you have to completely change the financial system? So I would say, at least with conversations like that, there are more people listening to them and bothered by it. That doesn't mean there's anything like a transition plan on those things that is starting to get realistic.
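
A minimal illustrative sketch of the two dynamics described above: declining energy return on energy invested (EROI) shrinking the usable surplus, and full-cost repricing rippling through everything with embodied energy in it. All numbers are made up for illustration; none of them come from Nate Hagens or from this conversation.

```python
# Illustrative only: toy numbers, not figures from the conversation.

def net_energy_share(eroi: float) -> float:
    """Fraction of gross energy left for society after paying the energy
    cost of getting that energy (EROI = energy returned / energy invested)."""
    return 1.0 - 1.0 / eroi

# As EROI on hydrocarbons falls, the usable surplus shrinks even if gross
# extraction stays flat.
for eroi in (30, 15, 5, 2):
    print(f"EROI {eroi:>2}:1 -> {net_energy_share(eroi):.0%} of gross energy is surplus")

def full_cost_price(market_price: float, energy_share: float, energy_multiplier: float) -> float:
    """Toy pass-through: scale only the embodied-energy share of a product's
    price by the gap between today's energy price and a 'true cost' price."""
    return market_price * (1 - energy_share) + market_price * energy_share * energy_multiplier

# A $3,000 computer whose cost is, say, 30% embodied energy, with energy
# repriced 100x (the low end of the $10,000-per-barrel style of claim):
print(f"${full_cost_price(3000, 0.30, 100):,.0f}")
```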

Jamie Wheal: Yeah. One of the fellows who attended our camp was actually a really goodhearted DC policy analyst, and his partner, of Tibetan origin, is working deeply in the House and the Senate as well. So they're way into the policy side of things, but interestingly had come familiar with your work as well, familiar with Nate, that kind of thing. And it was just very interesting to hear their inside baseball on how the policy gets made, like [inaudible 00:12:57] if people knew that this was all just overworked, Ivy League, early-twenty-something congressional aides that are fundamentally moving this all through the system, people would be shocked and horrified. But I'll tell you what, there's so much you've just teed up here.

This is probably one of the main threads of the conversation, but before we jump into it, I just want to also ask you about the other side, and I'll play my card first, which is: I have noticed a staggering number of people in the last two years get sucked down conspiratorial rabbit holes, and people I otherwise thought were sane and grounded are now in the tank for one particular reality tunnel, conspiracy theory or another, including otherwise fairly centrist public thinkers, et cetera, et cetera.

So, whether that is being convinced of some grand Davos-led Great Reset conspiracy, bundling in the vaccine concerns, bundling in the deep hermeneutic of suspicion all round, and therefore veering precariously, to dangerously, to irresponsibly into some form of accelerationism, meaning let's hurry up and get this thing over, let's help start tearing it down because the big-time baddies are out to get us. So first, let's just talk about that. What have you been noticing? Because I mean, I've been shocked at the sheer divergence among friends and colleagues that I would normally, even while we have a variety of different political and philosophical perspectives...

I always took them as good faith representatives of their stand. And we know some filmmakers, documentarians, right? There's a whole swath of these folks, podcasters. What is your sense of that? Because it's one thing to say at the bottom of the barrel of Reddit and HN and Telegram you've got some kooky folks saying some wacky-ass things, but it feels like that has bubbled up and has now been capturing a staggering number of otherwise what I would've thought of as reliable, independent thinkers. What's your take on that? First of all, are you experiencing it? And if so, do you have any sense of mechanism?

Daniel Schmachtenberger: Yeah, there's a lot in there, and it is true. There was obviously already a trend that had been strengthening for some time, since like post-Vietnam, of increased left-right polarization, and also increased pro- versus anti-institutional, or pro-standard-model versus anti-standard-model, orientation. Obviously we saw this pick up heavily with social media because of the nature of how social media works. But in this timeframe we saw it pick up a lot, and I would say the lead-up to the Trump election, the aftermath, and then COVID, definitely. But the way I would say it is, there's a lot more people that are conspiratorial, because that's part of it, but there's also, more interestingly, blinded standard-model support simultaneously.

Jamie Wheal: So, that's a mouthful so unpack that.

Daniel Schmachtenberger: The whole instant zeitgeist washing of "it's not a lab leak, it couldn't have been a lab leak, the science is settled; if you think that it could have been a lab leak or engage the hypothesis, you are doing some weird xenophobic conspiracy theory."

That was a really radical push in the zeitgeist that was not founded. The science was not settled. It was a totally reasonable hypothesis; whether it was true or not, it was [inaudible 00:16:45] something that needed to be explored, because if you have that big of an effect on the world, of course you explore all those things. And if we're doing gain of function research and synthetic biotechnology and we had an incident somewhere, we might have more of those incidents. So, it should absolutely be a totally realistic conversation, and the suppression of the conversation, that's what I would call a standard model narrative.

And there were more smart people that bought the standard model narrative without realizing how ridiculous it was, while there were simultaneously more that assumed that it was baby-eating people that were running a conspiracy against it. I see those as equally concerning, because I see them both as kinds of capture. Conspiracy isn't the essence of what I'm focused on. What I'm focused on is where people's sense-making is getting more captured by some narrative.

And so, the mechanisms that I would say are in place: one, increasing complexity makes it impossible for anybody to process all the information, so they have to decide some things about who they trust. And of course you want to trust the CDC or a trustworthy institution, but if you don't, then you want to trust whoever seems like they're doing the research on why you can't trust them. And so you get that kind of person, because who is going to go through all the CDC data, and then compare it with the various databases' data, and then compare it with this investigative journalist, not to mention the IPCC and climate change and the social justice cop-shooting things, to have a self-informed opinion on all of them? And then to question, is the data even good, or was there underlying bias in the nature of what was funded in the research?

So, all of the data's on one part of a bell curve, unevenly distributed. And so, I would say the sheer work and epistemological nuance it takes to make sense of things this complex rules almost everybody out. And then the consequentiality of it makes you want to not just sit in uncertainty, but have some sense of it. And then there's the emotional in-group/out-group polarization, and this is again one of the social media things, that anyone who's questioning whether the vaccine was the right approach for public health can get thrown into some tinfoil-hat-wearing anti-vaxxer bin, like everybody's going to die of whooping cough and smallpox because of these crazy people, when some of them are that, and some of them have a pretty nuanced view of, "Hey, I think there's more that we could have done on small molecule research."

And there are reasons why doing that small molecule research would've made sense, and there could be more safety analysis on vaccines. So there's something where it's like, we see the craziest left pink-haired Antifa people, and we see the dumbest, most bothersome MAGA people, as if representative of the whole groups, and that then makes the groups polarized against each other. And so I think the complexity, the total number of topics, the speed, the crappiness of the underlying scientific substrate with replication issues and all of those things, the media environment that orients towards hyper-normal stimuli, which will be confirmation bias and sanctimony and in-group identity, giving us the most extreme versions of the other, the straw man, never steel man, versions of the other side.

I think that whole suite of things leads to this, where of the people that I knew who I felt were doing a really good job of sense-making on, say, COVID-related issues, whether it was lab leak, or proposed small molecules like ivermectin, or vaccines, I saw some at the very beginning doing a good job of not getting captured by one side or the other, and almost all got progressively captured by one of the sides. Simply because they just couldn't pay attention to everything, and so certain choices ended up making the newsfeed of information coming their way more oriented in a certain direction, and it became very hard to not be affected by that.

I always think of it almost like World War I no-man's-land trench warfare, and it feels like the trenches are getting pulled further and further apart. And I'm sure you've got the accurate stats, but I feel like it's something like less than 5% of all social media posters make 90% of the news. So it is all the extremes, and as the no man's land gets bigger and bigger and more and more treacherous, tank traps and barbed wire and nerve gas, there's no fucking way you're going to make the sprint to the far side to jump in that foxhole with your potential enemy, so that, like, no man's land is dead.

Jamie Wheal: So there are no atheists in foxholes, right? You've got to believe something if you're jumping in, and you're more prone to jump back to your closest affinity group than to risk crossing the middle. And it just seems like the price of defection from my identified in-group is so high at this point. So, for instance, you could have a whole bunch of live-and-let-live middle Americans that are nonetheless getting pulled aggressively rightward, because the pink-haired, baby-killing Antifa hooligans are such the caricature of the left at this point that people are like, "I don't know which way to go, but the moderate middle is just getting completely hollowed out."

Daniel Schmachtenberger:  I actually don't have these stats, Tristan and Aza were telling them to me, but it had to do with... It was really cool social science that was-

Jamie Wheal: The Center for Humane Technology folks, for anybody following along, yeah.

Daniel Schmachtenberger: Yeah. And it had to do with polling people on the left about what they expected the percentage of certain things on the right to be, and vice versa, and seeing how badly off in a negative direction they were. Something like asking people on the right, and I think it was more specific, it was like asking Trump supporters at a rally, what percentage of Democrats are LGBTQ themselves? And they thought it was like half or something like that, and it turns out it was less than 6%. And there were similar things asking people on the left what percentage of people on the right want to cut all social services or whatever, and they estimated it to be way higher than it actually was. So you could tell that they were both dealing with caricatures of each other that they thought were true, that they were dealing with two standard deviations to one side of the Gaussian distribution of what was really there, thinking it was the median.

Jamie Wheal: And a Gaussian distribution is fundamentally the familiar bell curve that people understand, yeah.
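
A minimal illustrative sketch of that "two standard deviations off the median" point: if what you see of the other group is only its loudest tail, your estimate of their typical member lands far from their actual median. Synthetic numbers only; this is not the study Daniel is recalling.

```python
# Illustrative only: synthetic data, not the polling study referenced above.
import random
import statistics

random.seed(0)

# Pretend each member of the out-group has a position drawn from a bell curve.
population = sorted(random.gauss(0.0, 1.0) for _ in range(100_000))

actual_median = statistics.median(population)

# Suppose the feed only surfaces the loudest ~5% at one tail.
visible_tail = population[int(0.95 * len(population)):]
perceived_typical = statistics.median(visible_tail)

print(f"actual median position of the group: {actual_median:+.2f}")
print(f"median of what you actually see:     {perceived_typical:+.2f}  (~2 sigma out)")
```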

How Technology and Social Media Are Undermining Sense-Making

Daniel Schmachtenberger: So, this is Pat Ryan's stuff; he was talking about this years ago. He was predicting that we'd have this phenomenon, that things like cults would just start automatically popping up as a function of the information technology landscape. And I would say the underlying cause of this, or one thing we have to factor, is that of all of the singularities that get talked about, the info singularity is the first one. Meaning the singularity, the point at which things change so radically that we can't really predict what's going to happen or how to respond afterwards, and where humans can't keep up with the pace of technological change. The idea of the information singularity is that so much information would be published that no human could process all the relevant information about anything.

And so there would no longer be an expert, and that's already true, right? It's already true that you'll have more peer-reviewed journal articles within a specific domain, because of how big the world is and the universities all around the world, than any expert can keep up with even in their own domain, and obviously way more journalism on topics. And so obviously now we have to get into computational processing of all of that, since a human can't process it anymore. But now it's, well, what are the algorithms that are doing that computational processing? And currently the answer is things like newsfeed algorithms, Facebook and other newsfeed algorithms, at least deciding, of all the stuff out there, what makes it into your feed, because you're not going to see a million news articles in a day.

So the ones that make it in are based on personalized data harvesting about you, mostly focused not on trying to make you believe something, but on what you actually interact with the most. But the tricky thing is, what you interact with the most is mostly based on one-marshmallow stuff, right? Mostly based on what actually makes you click, because the title was so salacious and effective at making you click, or so many of your friends already had, or something like that. And so that phenomenon means everybody doesn't see the same thing, because there's too much stuff, everybody can't see the same thing, and so what everybody sees is the stuff that is AI split-test optimized in the direction of their twitchiness, right?

I think that is a term you use. And that usually means in the direction of their confirmation bias, because people are less likely to click on a thing that uses words they don't even know, which would mean expanding their uncertainty set, than on things that are already primed for them as something bothersome. And so this is not just a left-right thing anymore; it's an algorithmic basis of cult generation. And then one of the really fucked up things, I don't know if you... Well, of course you saw, you pointed out to me, I had these deep fakes made about me, these audio deep fakes.
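
Before the deep-fake tangent, a toy sketch of the curation dynamic just described: when a feed is ranked purely by predicted engagement, items that match your prior clicks and have the twitchiest framing rise to the top. This is illustrative logic with made-up scores, not any platform's actual algorithm.

```python
# Illustrative only: a toy engagement-ranked feed, not a real platform's code.
from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    provocation: float  # 0..1, how salacious/clicky the framing is

def predicted_engagement(item: Item, clicks_by_topic: dict) -> float:
    # Crude score: how often this user clicked this topic before, plus how
    # provocative the framing is.
    return clicks_by_topic.get(item.topic, 0) + item.provocation

def rank_feed(items, clicks_by_topic):
    return sorted(items, key=lambda i: predicted_engagement(i, clicks_by_topic), reverse=True)

history = {"outrage_about_the_other_side": 7, "boring_nuance": 1}
feed = rank_feed(
    [Item("boring_nuance", 0.2), Item("outrage_about_the_other_side", 0.9)],
    history,
)
print([item.topic for item in feed])  # the confirming, provocative item ranks first
```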

Jamie Wheal: Was it catamite, was that the term?

Daniel Schmachtenberger: Catamite. So there's a third one that just came up. I'm a biracial colonialist in this third one, but yeah.

Jamie Wheal: I'm large. I contain multitudes, I can have my contradictions, yeah.

Daniel Schmachtenberger: So yeah, catamite. Just to be clear, this means like a young male homosexual sex slave for older, powerful men, whatever. So there's this thing with my voice, talking to Lex Fridman, about me being a catamite and reciting a poem about it and whatever. And of course, to be clear, this is an AI-generated deep fake. I had nothing to do with it. I didn't authorize it. There's just enough of my voice and Lex's voice out there, so you can make things like this. So, the first step was the AI to curate all the content, which is the Facebook algorithm or the YouTube algorithm or whatever, or the Google algorithm. The next one is to say, you don't even want, when you do a Google search, to see 8,000 result pages, right? A million result pages. You want to see a customized chunk of info for you, something that read all of those and calibrated it to you.

And that's what GPT-3 can do, right? That's what the new advanced generative text AI can do: read that and say, in a thousand words, summarize X for me. And it can also do it in anybody's voice, and pretty soon anybody's likeness similar to it. So now you imagine the auto-cults, where not only are they being attracted by what they're attracted to now, which is just whoever is attracted to the same emitted twitchy thing, but where you can actually produce bespoke content from bespoke content producers in that space. So you can actually create gods of the auto-cults. That's all very weird.
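
A minimal sketch of the "summarize X in anybody's voice" pattern Daniel describes. `call_llm` here is a stand-in for whatever generative-text API is available (a GPT-3-style "complete this prompt" endpoint); it is not a real library function, and the persona and topic are placeholders.

```python
# Illustrative only. `call_llm` is a placeholder for a generative-text API,
# not a real function; the prompt-building is the point of the sketch.

def persona_summary_prompt(topic: str, persona: str, words: int = 1000) -> str:
    """Build the kind of request Daniel describes: read everything and give
    me a summary, in a particular voice."""
    return (f"In roughly {words} words, summarize {topic} "
            f"in the voice and style of {persona}.")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to an actual model API")

prompt = persona_summary_prompt("quantum physics", "a famous physicist")
print(prompt)
# summary = call_llm(prompt)  # would return bespoke, persona-flavored text
```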

Jamie Wheal: It's super weird. And also, I'm trying to think, so yeah, I think it was GPT-3, an AI text-generating algorithm, and somebody was already starting to use it. I mean, talking about race to the bottom, and whatever that law is, that any tech online will be used to make porn first. It was the equivalent of that, but it was for info marketers. So there were info marketers who were already trying to explore using cutting-edge AI tech to generate better info marketing sales copy.

Daniel Schmachtenberger: Of course.

Jamie Wheal: Right. But like anybody who's seen that rash of DALL-E, everybody and their mother posting on their social feeds, "Here are my five ingredients and here's my piece of art," right? But from AI generation, to speak of the auto-cults and even creating the deities, right? You could not only do it in the frame of, give me a summation of quantum physics as Stephen Hawking or Richard Feynman would say it, right? But you could even do it as Jesus, as Buddha, as Lao Tzu, right? You could actually literally give the AI the coding to give voice to our divine archetypes, which to me seems super fascinating, but also wildly spooky.

Daniel Schmachtenberger: Okay. So, totally spooky, will get abused. There's an important principle about tech, related to the thing you said about it'll be used for porn first and then internet marketing, which is: any new technology enables new things, right? That's what technology does, enable humans to do things that they weren't able to do. It will be used for all the things where there is incentive to use it, and where there is more incentive and return on that incentive, it'll get used the most there. So whenever somebody's developing tech, they might be developing it for a certain purpose that seems really lovely. But what they have to realize is it's going to be used for every purpose that is not regulated that there is incentive to use it for. And it will be used differentially the most by the ones that have the most incentive to use it.

And so, when we're developing new tech, we have to take that really fucking seriously when we're dealing with tech that makes changes to the world as quickly, as powerfully, at that scale, and as irreversibly as the new tech does. So you develop CRISPR, meaning you develop some gene editing technology where you have ethical review boards that you have to go through, with lots of good scientists on them, to ensure this is a good thing to do in a biosecure laboratory. But then you publish your paper, and you're doing it for immuno-oncology, right? To figure out how to edit genes to cure cancer.

But as soon as you, for that purpose, with that money and that major institution, figure out, "Oh, here's how we splice the gene to do that," and you publish it, now anybody with really fucking basic hardware and without ethical review boards can do that for any purpose they have. You make AI and make it open source, and you can use the AI for any purpose, to optimize computational processing of lots of stuff. I won't even talk about what those could be, but it's not that hard to think about. So, this is where I believe that we are moving into a world where, and this will be a radically controversial thing to say...

Jamie Wheal: Good.

Daniel Schmachtenberger: The reason why we don't like the term regulation is because we don't trust this government, or a lot of people don't trust this government or any government structure. The idea is that anything that is powerful enough to regulate will become corrupt, all the kind of libertarian, public choice theory critiques. Those are all good, and those are all right. I don't trust it for the same reason. But I also don't trust open market proliferation of exponential catastrophe tech with no regulation, where of course, if we're talking about drones or cyber or biotech, and specifically bio, where the ability to synthesize genomes corresponds with the best institutions in the world putting the pandemic-grade genomes that they're figuring out online through publishing, and the same with AI, the world just doesn't make it through that, right? The world doesn't make it through decentralized catastrophe weapons with no oversight. And you can't regulate after the fact the way we always have, like after DDT killed so much stuff, finally, we figured out how to outlaw it.

Jamie Wheal: Yeah. I mean, Roundup, right? Took 30-plus years and we're still sorting out forever chemicals.

Daniel Schmachtenberger: And lead in gasoline and all those things, right? Lead in gasoline is estimated to have taken a billion IQ points off the world and made us 4x more violent holistically, just to stop engine knocking, and that was-

Jamie Wheal: 4x?

Daniel Schmachtenberger: Yeah. More violent. Yeah, the realpolitik assessment. I'll send you a link on it. There's an incredible explainer piece somebody did on lead in gasoline as an example of externality. To fucking stop engine knocking, we take something that you have to mine, that would never occur in the biosphere, and figure out how to burn it and aerosolize it so that it's in everybody's breathing air, which makes us both dumber and meaner. And so then we get people doing social science and doing this realpolitik assessment of, humans are too dumb and mean for things like democracy to work, or things like nice systems to work, it's just genetics, we're nasty chimps. It's not fucking nasty chimps and genetics. It's shit that we did that we could do differently, and if we-

Jamie Wheal: Toxins in the brain piece.

Daniel Schmachtenberger: That assessment, the 4x more violent and a billion IQ points off, literally dumber and meaner, was one chemical, one chemical. We have 15 million chemicals in the chemical database of the American Chemical Society or whatever it is.

And how many of the other ones also have negative behavioral effects, let alone on the other side of the fucked up top soil that doesn't have the trace minerals that were needed for healthy neurotransmitters and everything else? So the whole let's study humans under ubiquitous conditioning and then call it nature and then say, "We're too fucked because of ubiquitous conditioning," no. So I have massive critiques of "humans are this way" studies where we studied humans that were all conditioned by the same things and then took it as inexorable. That was a tangent.

Jamie Wheal: Well, I mean, it-

Daniel Schmachtenberger: I remember, wait, the controversial thing I was going to say. New categories of tech have to be pre-regulated.
So you can't just put lead out because the market figured out how to add it, and then only after enough people die, and people figure out how to measure invisible stuff, and the policy groups go against the financial vested interests that are already making billions of dollars on this, you finally regulate it in one area, but it's hard because other areas are using it, and nobody wants to be the first one to incur the cost of the regulation. We don't get to do that with AI.

By the time it's causing that much harm, we're all dead. We don't-

Jamie Wheal: Or any other exponential existential tech. There's no, "Whoops," do overs.

Daniel Schmachtenberger: Correct, and you also can't do national regulation, because all these things create global effects very quickly. It's not like we can say, "Oh, we're going to stop doing gain of function research in the U.S., and that's going to matter."

The global pandemic is because of global supply chains. Either you get everybody to do them, or it doesn't matter that much.

Same with AI weapons. So now what I'm saying is, in the face of the amount of power of exponential tech, you can't let it out unregulated first, because either some people want to do fucked up stuff with it, or simply the accidents that happen.

And you can't regulate it at a national level, so you do have to have something like global regulation that does safety analysis on the tech to say, "Okay, these applications, not all applications, these applications done in this way are a safe way to put it out," and we continue to watch and iterate to change regulation and change tech design.

I'm saying we don't make it through if we don't have something like that, and yet nobody has proposed any system of governance we would trust with that. That would be some centralizing power dystopia, so-

Jamie Wheal: So it would be like SALT treaties or some other nuclear non-proliferation-

Daniel Schmachtenberger: Yeah, IAEA, exactly.

Jamie Wheal: ... agreements, but boom, across a whole additional swath of stuff.

Daniel Schmachtenberger: And stuff that moves way faster. Nukes move really slow, and so you don't have to do as much because everybody can't make nukes just when somebody publishes because the physical tech is so hard, but that's not true for [inaudible 00:35:59], and the same with everybody becoming conspiratorial or standard model bias.

There is no way for us to make sense of this much stuff. We do have to come up with institutions that can process this, that are both lots of humans and computational capacity, that we trust and that we have a reason to trust, meaning the kinds of audits and transparency and everything to avoid corruption and embedded mistakes have to happen. So this, I think, is a really critical thing to realize.

The Need for Collective Intelligence to Solve Existential Issues

Jamie Wheal: Okay, that's one that has baffled me. I mean, I even wrote a bit on it a month or two ago, which was just what I've noticed is that basically, because what you're describing is some version of a benevolent new world order. You're saying we do need global coordination for global existential issues.
We need to have authorities that have the capacity for enforcement and policy making. And the thing that's baffled me, again, back to what's happened in the two years of quarantine and that kind of thing, is that this is not just ultra-far-right libertarian John Birchers afraid of the Council on Foreign Relations and the Club of Rome and all these big-time baddies and watching for these things.

It's now on the left hand side. It's on the spiritual psychedelic transformational cryptocurrency side.

So everybody, even nominal progressive do-gooders, are now completely freaked out about any form of coordinated transnational governance and are now seeing that as signs of the beginning of the end, the move to the Panopticon surveillance state lockdown. And so now, even people who would normally follow everything you just said and agree and be rooting for benign authority to do something on our collective behalf are actually now so suspicious and resistant to any signals of transnational coordination that they're going to be, at minimum, passive resistance all the way to potentially violent uprising.

Daniel Schmachtenberger: But they're right, and so this is why I talk about catastrophes and dystopias as the two things we have to simultaneously prevent and find a third attractor that's neither, because if you try to avoid too much centralization, so you use things like markets or the separation of lots of countries or whatever, you end up getting these kind of multi-polar trap race to the bottom things.

They make the fucked up AI weapons, so we have to also make the fucked up AI weapons so we don't lose. They produce the carbon, so we have to.

Nobody can stop climate change. So we get catastrophes from that model.

In order to avoid those catastrophes, you can't just use incentives. You have to be able to use deterrents, because somebody will figure out how to have enough incentive to do whatever the fucked up thing is.

And if they get so much advantage out of doing it that then everybody else has to do it or lose, then you have to have a deterrent that says, "No, nobody gets to make bio weapons. If you start making bio weapons, we're going to stop that forcibly. We're going to do something to stop that," whether that's tariffs or war or whatever it is.

So then who implements that? To have enough control to prevent the decentralized catastrophe processes looks like enough control where you're like, "Wow, to have the power to do that looks like an uncheckable power," and the history of uncheckable powers is not pretty.

If people are concerned about excessively concentrated power, that just means they're good students of history because there is nothing in history that tells us that's a safe idea. And obviously, there are lots of times where a very malevolent dictator pretends to be a benevolent dictator.

That happens all the time. But even if you have a more benevolent dictator, you get the rules for rulers dynamic, which is the equivalent of a multi-polar trap on the other side.

The rules for rulers says, if there's that much concentrated power, the more power seeking people want it, and so you've now centralized a point of capture of the system, even if you set it up good at first. This is why it's like the Republicans and the Democrats, it's a common thing that when they have their guy in, they want to increase the power of the executive branch.

And then they get really upset that they increase their power of the executive branch when the other guy gets in, because fuck, your guy doesn't stay in, and you increase capture possibility. So anytime you have a system that has a lot of centralized power, how do you prevent capture, and how do you prevent corruption becomes the core questions.

How do you build that in, given that we've never had good answers to that? Because the more malevolent people are willing to do fucked up stuff to get in that position, the benevolent people can't stay totally benevolent and adequately deal with the malevolent ones under most situations.

So they either lose and it gets captured, or they become capable of winning, which makes them less benevolent, i.e., even the good kings putting heads on stakes to make sure that people knew not to do that thing because it was better for the kingdom as a whole, population-centric wide kind of punishment strategies.

So we have to solve the catastrophes, but we have to do it where the control mechanisms are uncapturable and incorruptible. That requires new shit.

It's not democracy as we have understood it. It's not a market.

It's not monarchies. It's not communism. It's actually some better processes of governance that the same info tech that is messing the world up enables if we do it rightly.

Jamie Wheal: Well, so now you're getting into the post democracy question, and back to [inaudible 00:41:39] and what people [inaudible 00:41:40] can consider these days, and so you were riffing on a lot of the multipolar traps, the race to the bottom. If someone's going to be a sociopath, if I have basically a non-zero suspicion that there is a sociopath amongst us, then I might as well do the sociopathic thing just to prevent the other person from doing it and then having us all with our pants down.

So therefore, we can get sociopathic behavior, even if all of us were actually otherwise decent people, but just not in a hundred percent trusting environment, and that sounds like the Slate Star Codex famous piece Meditations On Moloch. It was very rudimentary game theory as he went through all those things.

It's far better for a bunch of countries to sign on to anti-pollution treaties but keep polluting, because what they want to do is actually hamstring everybody else to get an advantage, all those things. So for people that are listening along at home, the Slate Star Codex Meditations on Moloch was one of those kind of central pieces a few years ago.

This also brings up, I think, is it Mencius Moldbug? That's his nom de plume.

He's here in Austin now, but he's been arguing fairly boldly and provocatively. He's kind of one of the Peter Thiel-backed disruptors, chaos agents, but he's arguing for some form of return to monarchy, AKA Plato's philosopher king, AKA Rousseau's enlightened despot.

So given the risk assessments and the coordination challenges that you're describing, are we at the end of a very brief, really fairly wobbly 300-year experiment in representative democracy, and are we moving towards, as Viktor Orbán and many others are starting to kind of signpost, a post-democratic, much more centrally controlled form of governance? And is that an innovation or an aberration from where you believe we ought to be going with this?

How Do We Reinvent Democracy?

Daniel Schmachtenberger: To get into what I think the social system adequate to safely steward the power of exponential tech must be, there might be many different specific proposals, but what the criteria are that must be in place to do that, and to do it in a way that is also not a highly centrally controlled dystopia, it's beyond the scope of what we can do right now, because there's a lot of other ways that these things fail and a lot of other criteria to make them work that we'd have to build. But I'll speak to democracy as a very temporary thing.

There's a lot of different smart assessments about why democracy, as we understood it, is a short little blip, and obviously that occurred because of some fairly unique things happening in the change of, mostly, infrastructure. Obviously, the first one that comes up in the kind of McLuhan line of the way media tech changes stuff is that the printing press enabled people to overthrow feudalism, because you could start having newspapers and textbooks, as opposed to the hand copying of information where you had to be very wealthy to have access to information.

And nobody thinks the idea of people being engaged in governance who have no idea what's going on makes sense. So the foundation-

Jamie Wheal: That was a founding father's kind of question, right? It was the illiterate mob [inaudible 00:45:08] dangerous.

Daniel Schmachtenberger: Yeah. It's a big part of why they chose, amongst many things, representative democracy rather than direct democracy. But the founding fathers were coming out of modernity's Enlightenment.

And the idea was that the two prerequisite institutions for democracy to occur were a very high quality of public education and a fourth estate, an independent news and media, so everybody could be educated enough to understand all of the issues and could ongoingly be informed enough to participate in collective choice making. And of course, in 1776, the complexity of the issues was way less.

You could actually be an expert in most everything technical at the time. You could understand the issues of agriculture and infrastructure and whatever, if you were a kind of Renaissance era enlightenment person.

Jamie Wheal: The Ben Franklins and the Thomas Jeffersons of the world.

Daniel Schmachtenberger:  Right, and that was why they wanted a representative democracy. They're like, "It's too high a burden for the average people to study all of that, but some people can, and the average people can study enough to be able to see who understands the issues better and pick them to represent them."

Obviously, now we're in a situation where the average people can't actually tell good thinking from populism, and the representatives not only couldn't be an expert in everything; mostly, they're not experts in anything or even particularly good thinkers.

They're people who are good at winning those types of political games. So the whole basis of that thing's eroded, but there's definitely the idea of... You've seen those things where they go around and interview Americans to show how sad it is about how dumb Americans are.

I'm sure the people who said the right answers didn't make it into the editing of the clip, so it's, again, trying to show us a strawman version. But do we see a lot of young people who think Africa is a country and can't name previous presidents and all like that? Yes.

There's a lot of that. So obviously, our public education is not doing what it would need to for something like democracy to make any sense at all, because what should the average person's... How much do we want to factor their thoughts on international relations if they think Africa is a country?

Jamie Wheal: Well, that's always boggled me, as all these news shows have rushed to be hip and online and who's tweeting what, and then they'll take social media things, and they'll be polling people on, "Do you think Syria has weapons of mass destruction?" How the fuck would you know?

Why are you polling a body politic that absolutely has no access to the relevant information to ask what they think? And somehow considering that up-to-date newsmaking is baffling.

Daniel Schmachtenberger: Okay, you're actually saying something else really important. Not only is it clear that people aren't getting educated enough on basic things like geography and history, let alone news and classified information. So the education's not there.

The news and their relationship to it is not there. And as we mentioned, rather than a newspaper where everybody reads the same thing, you have a social media feed now where everybody reads something personalized to them, optimized for engagement, which usually means more limbic than good thinking modes.

So the social media thing isn't just the end of the fourth estate; it's an anti-fourth estate. It exactly does the opposite effect of what the fourth estate was supposed to do, which is rather than give everyone a unifying basis of information to go to a town hall on, it gives everyone certainty on stuff that is exactly opposite of what the other people have certainty on and strawman, villainized versions where the social contract is broken.

And if your greatest enemies are the other citizens of your same nation in a democracy, that's the end of the democracy. So the McLuhan-esque scholars, like the guy at the Center for Digital Life, say that democracy is a function of the written word being the dominant method of media, and that actually TV was the beginning of the end of it, and the internet was completely the end of it. Because when the written word was the dominant method, not only did you have the enabling side, everybody could have textbooks and newspapers, but you also had people having to read long form to get any information.

And they had to know how to write. If they're writing letters and sending them via mail, i.e., the development of all of the modern democracies we have in that time meant you had to be able to linearize your own thought.

You had to be able to hold kind of longer thoughts, read books, things that didn't have hyper normal stimuli. It's a very interesting assessment that democracy, as we understood it, was the result of a particular way of info processing that was mediated by a certain info processing technology, without which that thing won't work anymore.

You can't do short attention spans and democracy, because democracy requires being able to hold many people's opinions and lots of conflicting information, and so that's one. There's three others I was going to say, of what was unique to the founding time of democracy that isn't true anymore, where the way we think about it moving forward has to update itself, but I can see you thinking on that one.

Jamie Wheal: Well, yes, and just to give us a little bit of elbow room on that, because my sense is you're effectively idealizing or strong-manning a sort of age of the early Republic where all citizens and voters were well-lettered, competent thinkers and some version of Renaissance men, and at that time, just men. But I mean, being an American historian myself, that early Republic, it was a fucking bare-knuckles brawl.

And they would host keggers and get all the people wasted, and they'd throw barbecues and cookouts. "Now put your vote in for our buddy who just sponsored this."

There were all kinds of thumbs on the scale. There were the Federalist Papers.

There were all kinds of, quote unquote, fake news and diatribes back and forth between Madison and Mason and those guys, so I just think it is important to understand that democracy has always been a wild, chaotic, hopelessly messy process, versus some idealized origin state that we have fallen from grace and therefore can never get back to.

To me, that almost gives us more resiliency now, being like, "Oh, these are the mutations. These are variations on the theme, but there's continuity versus a fall from grace," as far as the narrative arc.

Daniel Schmachtenberger: I don't think what I'm saying is that it was that perfected thing. I think what I'm saying is that the narrative of how democracy should work that was being developed at that time was partly for political expediency and partly based on real political philosophy and earnest thinking.

And it's usually always a mix of both, because memes that are just based on earnest truth with no motive usually don't catch on that well because no one's that motivated to push them. And so we wrote this article in [inaudible 00:52:12] Project called "Where Arguments Come From" on the kind of think tank research industrial complex of, if there's an idea that is trending, there's probably somebody that cares about it that is making it trend that funded all that research and did all of the ongoing publishing and that that's not connected to what is simply the best idea anyone's ever had in the space.

And there's quite a few in there that we did that are relevant to these topics. The one on how to mislead with facts is another relevant one on the people who get caught into biases that are not the conspiratorial ones but the standard model ones and think that it's all fact checked.

But so I'm just trying to take the earnest part of the philosophy of democracy that was occurring and say that was the philosophy upon which the political thinkers were saying, "This thing might work," is we could give everyone a good quality of education. We could give them all news.

And we could at least have a representative class that was very well educated on the topics, and the smallness of scale, so the printing press, the education thing, and things like public education so that that was being ensured for everyone, which wasn't a thing under most of feudalism. And-

Jamie Wheal: And tripartite government, term limits. There were all sorts of these fairly ingeniously constructed checks and balances, effectively to nudge towards some expression of the infinite game.

The intention isn't definitive win and loss but to keep playing. I mean, to me, the architecture, whether or not it's still fit for purpose, secondary question, but I mean, it does feel like the architecture was surprisingly thoughtful.

Checks and Balances on Power to Prevent Centralized Corruption

Daniel Schmachtenberger:  I think [inaudible 00:53:48] principles of the architecture that actually are still fit for purpose, so a different instantiation, a different way that they play out. So I don't know if you've had Alexander Bard on.

He's an interesting thinker in these ways, and he has an interesting analysis of the three branches of government in that most every system of governance that was stable long term had three things that were somewhat similar, structurally similar to a judicial, an executive, and a legislative branch. So I think things like separations of power are trying to get to something deeper, which is avoiding centralized failure modes, either from corruption or bias and blindness.

And that doesn't mean we have to do it through separations of power. We could do it through something like computational methods that allowed transparency on the information provenance through the whole system and assurance that all of the information was being processed with all of the relevant epistemologies, but that wasn't possible then, so simply checks and balances on power were how you prevented centralized capture, corruption, or failure.

Jamie Wheal: Say that last sentence again.

Daniel Schmachtenberger: Separating power into these various branches was a way of preventing centralized failure modes, and they didn't just do three branches-

Jamie Wheal: Like the mad king idea, right? Just somebody going totally off the rails and having unchecked power.

Daniel Schmachtenberger: Or even the emperor-has-no-clothes situation, where it's not that they're [inaudible 00:55:24] and they're really, really nasty, but where they are in a distortion bubble, where they have a bunch of yes men in a court around them. They just can't tell what's true, and the thing is failing for those reasons.

So the idea of a centralized system can have bad intent at the top. It can have mistakes at the top.

It can have modes of capture at the top for others who have bad intent who aren't currently there, so let's just keep it from being centralized so there's some type of stability, and the two alone can't do that, which is why the two party system gets so fucked. It ends up just driving polarization.

So you have to be able to have something like this three body problem of having them check each other and that they do different things. It makes sense that the judicial branch is optimized for wisdom that can adjudicate where the letter and the spirit of the law are different, not optimized for everybody has a say.

The creation of new law is optimized to go slowly representing everybody, and the ability to execute really quick shit that has to be quick has limits put in place, but there's a branch for it. All that was smart.

Jamie Wheal: But I mean, I would make the case that the thing that Trump did was he broke tripartite government. So he had an attorney general, and he just replaced attorney generals until they did his bidding, although both of his choices were more than happy to.

The legislative branch went fully in the tank, so instead of the Senate bowing up on him and being like, "You are the RINO, bro, and we are going to let you know that what this august institution will or won't do... " Now, this gets back into information ecology and doing an end run around public opinion via Twitter and a bunch of other things, and the judiciary, which, while people were rage tweeting about Trump's latest inflammatory thing, he and McConnell were packing the courts top to bottom.

They posted more court appointments in three years than Obama did in eight. And now, Roe v. Wade and all these other things, including McConnell's utterly cynical stalling tactics of appointments and this and that, now we don't have a tripartite government, and to me, that was the most concerning thing, watching 2016 to 2019.

I was like, "Oh, shit, this is now a decades-long hangover." This is not swapping out the guy at the top every four years, and yay, we get our guy back in or anything like that. This is now a systemic rupture of any of the checks and balances that you just laid out in the theoretical model.

Daniel Schmachtenberger:  Well, so we're saying two different things, and I think they're both true and important. We're saying that even if the previous system was well intact, it would be insufficient for various reasons.

And we didn't start to lay out all those reasons, but it would be insufficient. We can say things like, we don't have ways of ensuring that the representatives are representing the people, because we don't have everybody going to town halls and that smallness of scale anymore, and the approach of regulating after the fact, which worked when the speed of tech was slow, doesn't work anymore.

So there's a few things where it's like, even if that system were intact, it's no longer an adequate system. Or the fact that it needed a fourth estate to actually function well, and it doesn't. The other thing is that system isn't even intact.

That system has eroded profoundly, and so both of those things are simultaneously true. Because it's eroded so profoundly, we know some major changes to governance are necessary. And even if it hadn't eroded, it's no longer fit for purpose in the exact way it implements itself, even if some of the principles are right. So we're thinking about, "Well, how would we rebuild, now, something that is aligned with the right kind of principles, that both creates the kinds of regulations that are important and avoids the kinds of centralizations of power that are really problematic?"

And I don't think it's an unreasonable thought for anybody to say; even the most staunch founding-fathers person doesn't think the founding fathers would've built it the same way if they built it today, if exactly those guys were brought here now. The printing press wouldn't be the exciting information tech.

And given the total amount of information to process and given all of the changes, of course this current system would have a much deeper digital interface. Everybody can't fit in a town hall anymore, but everybody can fit on the internet, to be able to directly input on things.

And we can make identity as secure as we have for online banking, easily. That's already secure enough to be able to do a lot of things differently, so the idea that we have to, and can, start to restructure the thing very deeply I think is a pretty reasonable thought.

Jamie Wheal: Okay, well, there's a lot there. And in fact, that fellow that I had just mentioned who's in DC politics who just came to our recent event was talking about parallel democracy movements and the idea of people starting to sort of spin up alternate ways of governance. And you've been working with digital and direct democracy efforts.

You've referenced the Taiwanese expressions, which are increasingly, acutely relevant right now. I feel like you've been hinting at a model.

Do you want to just play this through and kind of lay out what you think would be optimal? I'm imagining there's some version of blockchain validations and security, and anonymity underneath it, or not.

Again, banks don't use it, so we don't have to go that way. But what would your best take be in a quick sketch of some version of digital democracy, and then how on earth would you pry the wheel away from the entrenched and potentially corrupt or captured powers and interests that are running the old system?

Daniel Schmachtenberger:  It's tricky. If we were to say, how do you design an ideal system from scratch? Even if you come up with a good design... obviously it'll probably be wrong, but let's even say we came up with a really good design... how do you do enactment? And this is where a lot of people like the people in Thiel's circle become libertarian.

They're like, "Your utopian ideals take so much violence to enact in the presence of all of the vested interests that don't want them. Is it still a utopia after you committed the amount of violence needed to enact it?"

Jamie Wheal: That's the Che Guevara paradox, right?

Daniel Schmachtenberger:  Yeah, exactly.

Jamie Wheal: I mean, he starts out as a bleeding heart do-gooder doctor on a motorcycle and ends up capping people in the jungle.

Daniel Schmachtenberger: For a reason that we can empathize with, the process that took him in that direction, but empathizing with it doesn't mean wanting to repeat it. So the other direction is to not design a perfect system and then be stuck with enactment problems.

The other way is, what is intelligible to the current system that could vector it in the right direction, and can those shifts converge to a new, adequate thing? The forward and reverse engineering approach, and I like to do both of those and have them inform each other, because I do think there are certain risks that are clear and imminent enough that they require acting and where the forcing function of them can make certain action occur.

They can also move us in a fundamentally different direction. I'm biting my tongue because it will take a lot to construct some of these things.

Let me construct one other thing first. When I was saying that, at the time of the U.S. democracy... and the other kind of modern European democracies came about at a roughly similar-ish time, but let's take the U.S. democracy. Obviously, the critique is also pretty clear that you can't separate the government and the economic system.

Those things are embedded, especially when it's something like liberal democracy that's trying to make the market do most things. It did depend upon genocide and slavery.

There was not a political economy that worked without the most gruesome things. And then we can only kind of say slavery ended when you actually really look at the history of the peonage and indentured servitude systems that followed, and things like that.

But also, it was able to kind of start to end because of fossil fuel slaves, because of the Industrial Revolution. And then this gets us into the Nate stuff we were talking about earlier, which was, "Oh, now rather than extract from another race, we can extract from nature radically faster than was ever possible, to produce enough abundance that this kind of system works."

This is another one of the ideas of why this is short term, what Nate calls the carbon pulse, a very short period of time where the abundance went way up from anything it had ever been, where you can have people happier with whatever the system is because you're creating so much distributed abundance of certain kinds that everyone feels like, "Well, at least it keeps getting better every year," but in a way that is fundamentally not continuous and actually self-terminating.

Jamie Wheal: Well, I mean, and that's worth talking about because all modern political theory, economic theory, I mean, Friedman, Hayek, the Austrians, you name it, all the way to Keynes and the new economics, whatever the fuck, quantitative easing, just print more money, because Keynes was basically like, "Hey, you can't ever print too much money, provided you can deliver on the investments," but there was that huge key asterisk of provided you can deliver on the investment, and so-

Daniel Schmachtenberger:  And no ecological consideration on that.

Jamie Wheal: But all of that was within that carbon pulse, so it was basically like we were trying to do sleep and physiology tests when everybody was given an eight ball to go home for the weekend, and you're like, "Okay, none of this is actually- "

Daniel Schmachtenberger: Or human nature test on ubiquitous lead.

Jamie Wheal: So one of the things that Nate has said that I think is really memorable, and when you're talking about slavery, because there is a strong argument... Yale historian Edmund Morgan made it in American Slavery, American Freedom.

He's like, "Hey, slavery is a central prerequisite for attempting the experiment of democracy." It happened in Greece.

It happened in Rome. It happened in the United States. It's not until you get to the mid 19th century with the advent of coal and steam power-

Daniel Schmachtenberger:  Slavery and genocide. It wasn't an empty land. You had to kill all the people that were here.

Jamie Wheal: And then massively extract all the crazy abundant natural resources, which were one time bonanzas also, but his point is that right now, today, we are enjoying, via the fossil fuel bump, the equivalent labor of 500 billion humans to support eight billion, or really, more realistically, to support four or five billion max, and that-

We can support four or five billion max. Those 500 billion, effectively, units of manual labor or enslavement, we're borrowing from our great-grandchildren. So we're effectively enslaving the unborn to enjoy our current standards of living.

Daniel Schmachtenberger:  Everyone might not be clear on that. The fossil fuel slaves you're talking about are that-

Jamie Wheal: Caloric units of barrels of oil.

Daniel Schmachtenberger: Yeah, that are doing work that would've been human work.

Jamie Wheal: Yeah.

Daniel Schmachtenberger: If you were to replace all the things that are running on hydrocarbons, the automation that's already occurred, with human activity, it would take about 500 billion humans to run the current materials economy.
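
To make the arithmetic behind that figure concrete, here is a rough back-of-envelope sketch. The inputs (roughly 490 EJ per year of global fossil primary energy, and roughly 0.6 kWh of useful mechanical work per laborer per day) are illustrative assumptions added for this sketch, not numbers from the conversation:

```python
# Back-of-envelope estimate of "fossil fuel slaves": how many full-time human
# laborers it would take to replace the work currently done by hydrocarbons.
# All inputs are rough, illustrative assumptions.

FOSSIL_PRIMARY_ENERGY_EJ_PER_YEAR = 490   # assumed global fossil energy use, exajoules/year
HUMAN_USEFUL_WORK_KWH_PER_DAY = 0.6       # assumed useful mechanical output of one laborer
WORK_DAYS_PER_YEAR = 365

EJ_TO_KWH = 1e18 / 3.6e6                  # 1 EJ = 1e18 J; 1 kWh = 3.6e6 J

fossil_energy_kwh_per_year = FOSSIL_PRIMARY_ENERGY_EJ_PER_YEAR * EJ_TO_KWH
human_work_kwh_per_year = HUMAN_USEFUL_WORK_KWH_PER_DAY * WORK_DAYS_PER_YEAR

equivalent_laborers = fossil_energy_kwh_per_year / human_work_kwh_per_year
print(f"Equivalent human laborers: {equivalent_laborers:.2e}")  # roughly 6e11, i.e. hundreds of billions
```

Depending on how you treat conversion losses, hours worked, and which energy flows you count, the result moves around, but it stays in the hundreds of billions, which is the order of magnitude of the figure being discussed here.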

Jamie Wheal: Yeah. What comes to mind is friends, if you've got any friends that have ever posted up in Asia. They live like rajas, in Kathmandu or India or Thailand or wherever. They've got maids and they've got gardeners and they've got all these things. You're like, we're all kind of doing that.

Daniel Schmachtenberger: Yeah. There's something called the global slavery calculator. I don't know if you've used it.

Jamie Wheal: Yeah.

Daniel Schmachtenberger: You basically plug in your lifestyle details. It's not looking at carbon slaves. It's actually looking at humans that are in indentured servitude conditions, that are making your iPhones and that are mining the cobalt for the battery for your iPhone.

And basically saying, based on your lifestyle, how many slaves. I think 27 on average, for an average American. Meaning that many, not just fossil fuel slaves, but actual indentured-servitude-equivalent types of positions, required to make the supply chain that your life depends upon.

Jamie Wheal: Yeah.

Daniel Schmachtenberger:  That is going to break. That thing doesn't get to keep happening that way, for a lot of reasons. The reckoning of that is really significant.

Jamie Wheal: Okay. Well, this now brings us back to where we put a pin in the map, with that initial thread. So we kind of jumped into sense-making, meaning-making, post-COVID, information ecology, et cetera.

The World’s Energy Catastrophe

Jamie Wheal: Now let's come back to Nate Hagens, our friend and colleague, on energy blindness, which is his general take. Which is, hey, you've got to include, where are we pulling all this juice from to create the substrate of the society we live on? And of course, what are the implications, as we appear to be getting to peak carbon and/or the downhill slide on the other side?

If you ask me, as I'm trying to run the numbers... I'd love your input. I don't know if you saw any of the stuff on Sri Lanka, which has recently gone through political instability/collapse.

Bari Weiss did a kind of snarky, this is the problem with woke ESG goals. This is what goes wrong.

There's definitely a whole information warfare on woke ESG, like coupling those two things together.

I think her critique was superficial and snarky, because some additional energy scholars that I was tracking were like, hey, Sri Lanka was a more or less subsistence economy, dirt poor, of about 7 million people. They got the World Bank, IMF double whammy: let's give you debt and investment.

Then you stop growing all your food. You start exporting more of that kick-ass tea you guys make. We'll give you more return on your cash crop, to then go and buy and import the foods you actually need.

The population ramped from seven to 21 million, and then they were trying to get off pesticide. I mean, this is a gross simplification, but this is more or less as I tracked it.

Lately, they've made a couple of efforts to get off high petrochemical, high nitrogen, NPK fertilizer, turbo industrial agriculture. That then has created a crash, which was also mirrored in Holland, in the Netherlands, with the Dutch farmers protesting lately and driving tractors, kind of like our truckers' convoys.

All these things happening, but now you're at a situation where Sri Lanka as a government, as a people, are potentially trying to do the right thing. They're trying to get to more sustainable farming, but they're already addicted to the heroin of the global market economy and dependent upon it.

Therefore, to get off it is going to create incredible suffering. Most estimates were that the natural, whatever that would mean, carrying capacity of their land mass is about 7 million people.

So, they're looking at a 14 million person catastrophe, even if they were to try to get back to anything resembling a sustainable population. But now, isn't that just us at 8 billion people?

I mean, did you read Peter Zeihan's book, The End of the World Is Just the Beginning? If you take Zeihan as one thumb, if you take Nate as another, throw in Vaclav Smil and anybody else doing this big meta-systemic energy-informed analysis, it feels like to me, we've got two choices.

The Kuznets curve of greater technological advancement creating decreased pollution over time. It's a question mark. I don't think that is established canon, by any stretch.

The papers I've seen have said that, hey, that was true for Western Europe and OECD countries. It's not proving out when you test the Asian tigers, when you test a bunch of other post-1950s moves in that direction.

Nonetheless, it's either we find a way to successfully navigate off the top of this curve, of the carbon pulse, onto something sustainable and renewable, with all the caveats and asterisks about what sustainable and renewable actually could or does mean, with lithium ion, with windmills, with all the high embedded energy shit we currently call renewable.

Can we do that without a total disruption to our sociopolitical economic structures? Can we do it in time to avoid an utter population crash and the erosion of our capacity to innovate further?

Einstein's thing of, if the third world war is fought with nukes, the fourth will be with sticks and stones. Are we not heading to that bifurcation?

Because to me, that seems like the monster bifurcation. You've got surveillance states and catastrophe, and a third attractor, which seems beautiful.

But to me, even before that, even more foundational is, either we just hop off one sinking iceberg to land on the next, as far as the foundations of our energy economies, or we suffer a population collapse of truly horrifying proportions.

Because if you look at all the population curves, they all hit the hockey stick mid-19th century. We were at a billion people in 1800. We're at 8 billion now. It's doubled in my and your lifetimes.

So surely, you take away NPK fertilizers, you take away genetically modified crops, you take away over-tapped groundwater, which is... I think Nate even used the term fossil aquifers, just to indicate non-renewable, millions-of-years-to-recharge kind of thing. Are we not looking at a Malthusian horror show when and if we simply run out of oil?

Daniel Schmachtenberger:  Yeah. I think oil is a really good center point. There are a few other ones that are not reducible to oil.

When you were mentioning NPK and Sri Lanka's issue there and what that did to population, obviously NPK started with N, just the Haber-Bosch method and nitrogen fertilizer, and we went-

Jamie Wheal: Which was rocket engine stuff for the Nazis, for anybody playing along at home. That's wild military industrial complex shit.

Daniel Schmachtenberger: I mean, the green revolution agriculturally, in that way, was as deep a part of the thing we can think of as the beginning of Industrial Revolution, as using hydrocarbon energy.

It was, use hydrocarbon energy and make soil radically more productive in areas that it wasn't productive and productive for a very narrow set of purposes, meaning caloric consumption to grow and sustain populations.

So we went from about a half a billion people at the beginning of that curve, for the... however many, three million years since the early things we'd call humans, and 100,000 years of current ones, ever to get to half a billion people. And then half a billion to 8 billion, largely as the result of being able to extract nutrients from soil faster than they could replenish themselves.

All that nitrogen of course, also equals nitrogen effluent, as it flushes out, which equals dead zones in the ocean, which is very close to a catalytic tipping point level of being able to make life inhospitable for any large number of people.

So what we're looking at is, our economics has everybody oriented to benefit themselves, where they will extract unrenewably and they will pollute unrenewably, maximizing extraction and externality.

The entire market system is oriented around maximizing extraction and externality, maximizing profit. Profit is an extractive measure, revenue minus expense, how much you get out of the system, more than you put into this system.

It's supposed to be a measure of productivity. How much did you add, in terms of the value of putting those parts together?

Of course, that's not factoring that nature has a balance sheet. So, it's very important to understand that profit is as much a measure of extraction as it is of production. We have to think about that more realistically.

Jamie Wheal: It's like the Tolstoy quip in Anna Karenina, where he says, "Behind every great fortune lies a theft."

Daniel Schmachtenberger: Yeah.

Jamie Wheal: If you write that large across our civilization, you could make the case that behind our abundance simply lies externality.

Daniel Schmachtenberger: The Georgia Guidestones, I don't know what the fuck happened. They were blown up recently. But you remember, they said, maintain a population of half a billion. That's the pre-Haber-Bosch level of the…

That was the pre-industrial agriculture level of global population. So, some people look at it and say, how many people can the earth support sustainably? Well, the most we empirically know is maybe around that number.

Now, of course, technology can create efficiencies that could make that number higher, but does it make it 8 billion? Does it make it the projected numbers we're going to?

Those are very important real questions, but let's also look at other fundamental stuff. So we're looking at energy, but then the energy's mostly... now moving some bits around in the digital world, but mostly moving atoms around in the physical world.

It's important to understand that the thing everybody's focused on, that we can keep growing GDP forever without hurting the earth because it'll all be digital value now, that's a nonsense idea.

Because the digital infrastructure runs on a physical infrastructure, you need to do a lot of rare earth mining to make all of the computer chips. You need to use a lot of energy to run the server farms and all that kind of stuff.

Ultimately, people have a finite amount of time and attention to engage in content online. So, you do end up getting a diminishing return on how much of the economy can be purely digital, the bits part, relative to the energy and atoms part.

Jamie Wheal: Yeah, we all got to eat. We live in these meat suits and they are in 3D, irreducibly.

Daniel Schmachtenberger: So in the same way that we have an issue on energy, we also have a fundamental issue of atoms, which is that all of the atoms in the materials economy are coming from unrenewable extraction and creating pollution and waste unrenewably, on the other side.

So to close those atomic loops, where all the new stuff comes from old stuff and none of the old stuff turns into unrenewable waste or pollution and that the energy required to run that closed loop cycle is sustainable energy, those are the beginning requirements for the materials economy of how we live in relationship with the planet.

Of course, the closed loop means not just the minerals needed to make stuff, but also that the process of that acquisition is not driving species extinction and biodiversity loss and destroying the biosphere that we inhabit.

When you are energy informed, or you're biodiversity informed, or you're closed loop materials economy, pollution, et cetera, informed, those all look bad. Those all look bad.

Jamie Wheal: I mean, I did just read somewhere... someone was talking about closed loop renewables in battery recycling, to just electrify all of our transport.

They were saying, we're only at something like a quarter of the extracted and converted stock of batteries we'd need to be able to do that. So, we still need to do three to four X more mining, before we even have the stock to engage in indefinite recycling.

Daniel Schmachtenberger: That's at the demand amount of today, not at the demand amount of three to four years from now, on an exponential growth curve up.
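
A small sketch of why that is: if today's recyclable stock covers only a fraction of today's demand, exponential demand growth shrinks that coverage every year. The 25% coverage and 25% annual growth figures below are illustrative assumptions, not numbers from the conversation:

```python
# Illustrative sketch: a recycled-materials stock that covers a fixed fraction of
# today's demand covers a shrinking fraction of exponentially growing demand.
# Both input figures are assumptions for illustration only.

coverage_today = 0.25       # assumed: current stock covers ~25% of today's demand
demand_growth_rate = 0.25   # assumed: demand grows ~25% per year

for year in range(5):
    demand_multiple = (1 + demand_growth_rate) ** year
    coverage = coverage_today / demand_multiple
    print(f"Year {year}: demand x{demand_multiple:.2f}, stock covers {coverage:.1%}")
```

So an estimate of "three to four X more mining" that is pegged to today's demand understates the gap once growth is factored in, which is the point being made here.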

Jamie Wheal: Okay. Well, so talk to me about, one of the... Well, one of the things-

Daniel Schmachtenberger:  It's worse than that.

Jamie Wheal: Yeah, yeah. One of the things that Peter Zeihan... for anybody following along, that book, The End of the World Is Just the Beginning, which came out a couple of months ago, kind of hit public saturation.

One of his major tubs to thump is demographics. What is happening to the aging of populations everywhere around the world, in particular hardest hit, probably China.

Basically his thesis is, the faster you modernize, basically the steeper the cliff. England took four generations. Germany took three. Japan took two. So they've all got some buffered edges to their demographic curves.

But the Asian tigers, and China's crazy ramp up after the obviously isolated communist era, those guys are all just running off the cliff. A billion Chinese are going to be 600 million within a few decades.

That completely erodes the worker capacity, the economic output, GDP, et cetera, et cetera, and potentially even the viability of these nation states.

Elon Musk has been tweeting about this. Of course, it's mostly not been taken in context. It gets sensationalized with, he's having all these babies with all these women and that kind of stuff. But how do you reconcile those…

Because to me, when Elon's like, we should have more babies, I think what he's really saying is, we should have more super genius Montessori kids and less dirt poor, starving kids in Sub-Saharan Africa.

I mean, I'm absolutely putting words in his mouth. They may not be right. But when you've got a demographic critique like Zeihan's and just think of yourself as the CCP or any government…

Sweden, I think, or somebody in Scandinavia is actually trying to sponsor dirty weekends for couples. Literally, we'll pay for you if you conceive over the weekend because our demographics are tanking as well.

So it's clearly almost a national security issue of, how do we have more people to keep this train going?

How do you reconcile that with the flip side, what we just talked about, which is potentially that a stable carrying capacity of this planet, absent excess fossil fuel wattage, is half a billion, a billion, two, take your pick? But somewhere radically less than what we're talking about now.

It's clearly based on near decimation, not extension or expansion of our global population. Because it just feels to me like, the people who are focusing on each one, aren't including the other perspective.

Daniel Schmachtenberger: Correct. When you make a system that is dependent... when you make a global system where we then depend upon it, but it is dependent on an unrenewable process, you're going to have a problem. Because at a certain point, the substrate that that system is depending upon says the system has to change. Yet, we've made a lot of other things dependent upon that system.

That's what we have, in a lot of ways right now. We can't not keep growing global GDP exponentially, with our financial system being what it is.

It was supposed to be that there were goods and services that were exchanged. And then we figured out a way to make an amount of currency that indexed the value of goods and services.

It's very hard to figure out exactly how many goods and services are being created. So at a certain point, you get central banks and a Bretton Woods monetary system. You just basically say the financial system now has its own mechanics.

People are lending money. Whether it's just putting it in a savings account and the bank does that, or it's traditional debt or whatever, they need interest on that money back, because they're letting someone else use their productive capacity rather than use it themselves, in which case they would grow it.

So the financial system now has its own physics where it has to grow, just to keep up with interest. Not also factoring the other reasons it has to grow, in terms of more advanced types of debt and money printing and things like that, but just to keep up with interest. Compounding interest means you get an exponential growth curve, to not deflate the value of that currency.

That means you have an exponential demand on the physical earth, which both means energy and raw materials. And given that that's been coupled to population, that also means a continued demand on the people being able to do the people parts of that material economy.
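
To make the compounding point concrete, here is a minimal sketch. The 3% interest rate and 50-year horizon are arbitrary, illustrative assumptions:

```python
# Minimal sketch of the embedded growth obligation: claims that compound at
# interest grow exponentially, so the real output servicing them must too.
# Rate and horizon are arbitrary, illustrative assumptions.

interest_rate = 0.03
years = 50

claims = 1.0                      # total financial claims, arbitrary units
for _ in range(years):
    claims *= 1 + interest_rate

print(f"Claims after {years} years at {interest_rate:.0%}: {claims:.2f}x the start")
# ~4.38x: if real goods and services don't grow roughly in step, the gap shows up
# as inflation, defaults, or intensified extraction from the physical world.
```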

Jamie Wheal: To do the people parts, that's nice.

Daniel Schmachtenberger: Well, Elon talks about a factory being a cybernetic system. Ultimately, this whole thing is a cybernetic system, in which the tools are complicated, not complex. They don't make themselves.

Jamie Wheal: Just define cybernetic system.

Daniel Schmachtenberger: A complex control system, an artificial control system that in this case, involves human elements and technological elements and coordination of those human elements.

As soon as you realize the people make the tools, the tools don't make themselves. They're not self-replicating, the way that animals or people or plants are.

The people make the tools, and then the tools predispose patterns of human behavior, to get them to make more tools, 'cause the people who make and use the tools get ahead, relative to the ones who don't. So then everybody has to and et cetera.

So the people make the tools. The tools end up controlling patterns of human behavior. That system as a whole, that is the tool human system, the tech stack human system is a complex cybernetic system.

Nate calls it the super organism. Benjamin Bratton calls it the stack. We've called it the paperclip maximizer or Moloch.

Since humans are part of that cybernetic system, and that cybernetic system has an embedded growth obligation... our friend Eric Weinstein's term that he popularized... an embedded growth obligation to exponentiate its output, you need an exponentiation of the population.

Well, that was easy because it was being built at a time when the population was growing exponentially. But then you required exponential growth of all those things, and you get to a place where the limits to growth... which is why that was a good title for that book back in the day... the limits to growth don't let it keep doing that thing.

But now, what would it mean to have a financial system or a way of mediating exchange of value and property rights and access to things that doesn't involve interest? There's nothing in play to change that, currently. So yeah, we-

Jamie Wheal: Where's Bono when you need him? Forget Africa. We need global debt forgiveness. We just need a quick reset and then do-

Daniel Schmachtenberger:  Now, before the removal of Glass–Steagall, that might have seemed like a good idea, before radical deregulation. Because the idea, hey, there's radical wealth inequality, it must be all of the super duper billionaires who have the money and are loaning it to everybody else.

So let's just do debt forgiveness and only the uber wealthy will get fucked in that and everything else will be better. That's not true.

Now, because of the deregulation, everybody's retirement funds... when we're talking about government employees and teachers, their 401k is being lent out.

So if you do that debt jubilee, everyone's retirement is gone. They don't know how to run their life and work as old people. So, the system is really set up to be hard to change.

Jamie Wheal: Yeah. Well, listen, I want to talk to you about this because A, this is deep node. So, anybody listening along, we are in deep nerd land. I mean, I love it. I love to get to ask you the kind of questions I don't get to ask too many folks.

There's that film that came out on Netflix. It's one of David Attenborough's, a couple of years ago, called Breaking Boundaries. It's a complete departure from his normal, look at all the pretty animals.

It's with Johan Rockström, from the Potsdam Climate Institute, which is arguably one of the most credible institutions in that space.

Rockström is basically just describing nine parameters or whatever it is, around the world, tipping point gauges to be watching. Where do we stand in each of them?

He then came and presented to our friends, Amory Lovins and the Rocky Mountain Institute folks.

One of his and therefore RMI's assumptions as of last year, I think was, we are effectively going over the falls in the barrel we came in.

So much of your critique and analysis is, this is self-terminating, this is self-terminating, that's self-terminating. These are all inescapable generator functions of demise. The only solutions are for us to reinvent education, reinvent capitalism or economies, reinvent politics and governance.

Run the thought experiment with me. I hear what you're saying. The logic is fairly airtight, other than humans have always muddled along, with these imperfect situations and conditions and institutions.

But if you take that, we're going over the falls in the barrel we came in, what's your prognostication?

We are who we are. We have our goofy, dysfunctional, captured government. We have our wacky inflation-incented economics. We are who we are.

We are products of our time and place. The inertia of our existing way of living exceeds the ability of our trim tabs, to borrow one of Bucky Fuller's notions, to do this.

So what's your sense, realistically, other than somehow having this confab of world leaders and thinkers that just suddenly decide enough's enough and we need to rebuild everything simultaneously, from scratch? What's your actual sense of capacity to go over the falls and this barrel we're in?

Daniel Schmachtenberger: Yeah. There's a lot more things that we should construct about the other nonreducible things that are not able to continue, that the system depends on, that are self-terminating, because then it gives us an even deeper sense of what has to be reconstructed.

I won't go down these rabbit holes, but let me just mention briefly, we were talking about the problem of the financial system having an exponential embedded growth obligation that is connected to a materials economy. Well, you just can't do that on a finite planet, with an open loop materials economy.

The financial system also, even deeper, if you have your assessment of value all quantitative and in a single quanta, dollars, then the value of everything that is qualitative gets converted to quantitative. What will the market bear? All different metrics get converted to a single metric. How much for this person's creativity, this original Rembrandt, how much copper, how much for this animal's life? It's all dollars.

What that means now is, the real question of, can I convert tons of CO2 into saved rhinos or fewer parts per million of mercury in the ocean or higher quality of life for old people? No, they're fundamentally different things. Those metrics are not interchangeable with each other.

But when you determine the market-based value of all of them in a single metric, then rather than maintaining each of them with the accounting and logic they need, the logic of dollars ends up operating wherever you can maximize extraction and externalization.

If I value something that doesn't give me more dollars, I lose economically to those who only value things that are quantifiable, extractable, exchangeable. Then they have more power in the system. So, those that have the most perverse value system end up winning and determining the system.

The future system won't have something like a single fungible metrified currency. That's really fuckin' different. That's so different.

Because then if you say, okay, well, we'll have lots of different currencies for different things, well, if there is an exchange between them, then you end up still getting one quantified, I mean, one fungible thing.

Jamie Wheal: Choose your reserve currency, that everything has a season.

Daniel Schmachtenberger: That's an example again of, oh, the future things we're talking about are really, really different. There's so many of these, and it's important if we're going to try to talk about how to design a future.

If we were just talking about these issues that we've talked about so far, regarding the global mega machine and energy and that, and we weren't recognizing that we're within a single digit number of years from tabletop gene synthesizers in basements... which actually, I think gets us to globalizing catastrophes faster than these other globalized things do, we wouldn't have to focus on regulating exponential tech that fast or the exponential tech applied to AI, applied to population-centric warfare and mind control.

So it's all this shit at once, and that's the important thing. Most everybody gets overwhelmed with one of them and then over-indexes everything on that thing. But the solution to that thing makes other problems worse, most of the time.

Jamie Wheal: Sure. But this is the Schmachtenberger effect. Right? Listening to you credibly running down all the threads, and all of them being self-terminating, my sense, let's say just as a listener, is I either slit my wrists, or I gather my loved ones and I fuck off to the deep woods, because none of this is solvable, given how we've done so far.

Daniel Schmachtenberger: Yeah. I think it is solvable, actually. So, thank you for bringing this up. Something I find interesting in the current thinking environment, as you mentioned, of everybody being deep in not-clear-thinking rabbit holes, one of the other things I've noticed is that it's very common for people currently to go from high epistemic certainty to, when that's adequately challenged, epistemic nihilism in one step.

They're super certain about whatever the thing is, regarding COVID or climate change or institutional racism or whatever. Mostly, they're closed to having their thinking challenged at all.

But if you do adequately challenge it, including the solution that they have hope in... Because as you know and as you bring up, people need hope. If you challenge them with, "Hey, renewables aren't going to get us there in time to solve the climate change problem because dot, dot, dot," or whatever it is, they go to, "I don't fuckin' know what's true."

But not like a, hmm, that's interesting. Let's work on that and figure it out. But like a give up and nihilism.

You can't go from certainty to nihilism and possibly have any chance of being a part of the solution.

You have to be able to say, I don't have anything close to adequate certainty, but that also means I don't have adequate certainty that the whole thing is inexorably fucked.

I should not have certainty about what the right answer is, but I also shouldn't have certainty there's no answer. Those are both unfounded.

So I should say, the unknown unknown set is pretty large. There's a lot of shit I don't know, and there might be a lot of solutions in there. If I don't know the problem space well, I won't even know how to recognize them, because I'll think the thing that just solves the tiny part of the problem space I currently know is a good answer, while externalizing harm somewhere else.

I need to actually be willing to engage in understanding the problem space better, so I can recognize good solutions. And then I have to be engaged in working hard to figure out what those might be. So, I think people's willingness to sit in not certainty and work harder, is really important. 

Jamie Wheal: That's just worth emphasizing, sort of agnostic optimism.

Daniel Schmachtenberger: Yes.

Jamie Wheal: I don't fuckin' know, and I'm keeping the faith, regardless.

Daniel Schmachtenberger: Faith is a key word. I don't think we have any chance without something like a faith. It's not a faith that God will figure it out, and I don't have anything to do.

It's not a, 2012 will make us all higher dimensional light beings or Jesus will come back and fix it, or the new version of Jesus will come back and fix it. Which is, the AGI will become super intelligent, super benevolent and fix it all for us.

Those types of faith are totally pernicious and dumb. The atheist movement gets rid of them for the obvious, smart reasons. They're a child's faith.

But there was a kind of faith that every innovator had, that a solution might be possible, that a new discovery might be possible, that had them work in the presence of the unknown so hard, to find something.

It's a faith in possibility, not in certainty, not in a specific thing. There is no such thing as real inventive innovative work, that doesn't have a faith at the center of it.

I don't know if this is exactly true, but I heard from Jean Houston once, when she did her... that her PhD thesis helped kick off the Human Potential Movement back in the day. And that she studied 50 of the top creative polymath geniuses at the time, looked at their psychology and wanted to see, do they have anything in common that the rest of the population doesn't have in common, that is developable, that could be the basis of a future education human development system?

She studied Bucky Fuller and Margaret Mead and Linus Pauling and Jonas Salk and the fuckin' great luminaries from that time. The first of the qualities she said she noticed that they all had in common was that they all had an extremely positive emotional valence associated with failure.

You can see that there's a kind of sad, superficial version of this that Silicon Valley has taken, called Move Fast and Break Things.

But there was a deep thing. She said when she talked to Bucky and asked him, "What does it mean if you're failing?," that he started rocking in his chair and smiling and he closed his eyes. He said, "It means that there's an intimacy with nature that I have, because I am working at the edge of human knowledge. I'm working on touching nature in the cosmos, in a way no one ever has. Because if people already knew how to do the thing, I wouldn't fail. I would just implement the recipe. I'm only failing 'cause I'm working at something nobody knows how to do, and there's something so ennobling and honoring and beautiful in that."

That's a different relationship to failure than most people have. That's what makes a Bucky or a Jonas Salk, versus most people, and if we started to have…

Now, this is one of those things, in terms of population centric. Do I think that those people, rather than representing extreme outliers on the right of the bell curve, that we could actually move the whole bell curve, to have more people functioning in similar ways? I do believe this.

Jamie Wheal: Well dude, interestingly random and apropos, this is the new Bucky Fuller biography, Inventor of the Future, which just came today.

I just read the quick book review, and it turns out that he's fundamentally... Most of the prior biographies are kind of hagiographies, by disciples.

This one is more of an even-handed one. It's just a very, very mixed, more complex picture of Bucky Fuller the man, who turned out to potentially be a bit of a gaslighting, womanizing, cantankerous misanthrope as well.

I mean, if nothing else, to give ourselves broke-ass permission to do our partial humble, fallible bests and not think that we can't all attain some of those qualities that Jean Houston was describing, because everybody's banged up, broken and partial and still can make a positive dent in the universe.

Daniel Schmachtenberger: Yeah. I think that's super important. I haven't seen that book. I'd be curious to see it. I definitely had a process with many of my heroes, growing up, whether it was Mahatma Gandhi or MLK or whatever, that when I got the more real politic biographies, it was less purely rosy than the thing I had thought.

That was actually really good because of the point you mentioned, that someone can be pretty fallible, have a lot of damage and still actually do incredible work.

That doesn't mean that we should not work to take responsibility to really try to clean that shit up in ourselves. But it does also mean that we shouldn't take the presence of those things in ourself or others as a limiting factor of the possibility of meaningful contribution.

Jamie Wheal: Yeah, absolutely. I mean, that's why we kind of playfully coined that term 80/20, awoken to broken, the fact that we'll always be. That is our Planck's constant, is just 20% fucked up human. We will always be that.

Then the plug I always make for Nancy Koehn's book... She's the Harvard Business School historian. Her book, Forged in Crisis, which is Dietrich Bonhoeffer, Rachel Carson, Frederick Douglass, Shackleton and Lincoln. Just this really beautiful contrarian set of biographies of their moment of crisis and challenge and why they're in the history books and that they were just scared shitless, flying blind, going on their best guess and just the profound humanity-

Going on their best guess and just the profound humanity of these historic figures, and how they shaped the course of history, but they did it from incredibly humble mortal fallible positions. That after the fact, after that all the dominoes toppled in the right way, we can look and see as destiny or history, and then absolve us from being part of that same process. So it just feels really important to ground ourselves in our broke ass humanity, and willingness to try anyway.

Limitations and Possibilities of Transcendence

Daniel Schmachtenberger: It's interesting because a lot of the people listening here probably are pretty interested in their own growth as people. That's why, whether it's how do they eat healthier and take better supplements, or do the right meditations and psychotherapy, or study the right topics, they're interested in how to be a more whole, complete person. Part of the motivation of that is it's just intrinsically valuable for the quality of life and how they contribute to their family, but also the idea that they will be in a better position to contribute to the world from there, which is true. But it's also interesting to notice in those biographies and even more extreme cases with Churchill and so many others, you see people that contributed in really profound ways who were more broken than average. Not very personal development oriented at all.

There is an interesting thing where if someone puts themselves or finds themselves in a position where things of real consequence can be influenced by their behavior, that can up level the person in really profound ways. It still doesn't clean up all of those things, but it can clean up some of them and/or motivate them to transcend it enough. Yet, it can also end up that all the good things they're doing are a little bit damaged and polluted by some of those things. Simultaneously, there is no end to the personal growth work to be like, "Now I have arrived and I can start making contribution to the world," because I can unwind my own patterns kind of forever. So there is a thing where it's like... The way I orient to it is there's a sense of Dharma of how my life is connected to, dependent on, and thus must be in service to the greater world. And what is my calling there? And how do I get actively engaged in that?

Then a major part of my personal work is, where is there shit in the way of me doing a really good job? Of course, whether it's a really good job in my family or in my community or in whatever the larger-scale work is, if I have that. So it doesn't have kind of the infinite solipsism of just personal work, but it also doesn't have the working on the world in a way where your own blind spots keep getting in the way. One thing I would say is everybody has blind spots by definition. I can't see some things about myself that other people can see about me. If you want to not have your life... If you want to not harm things that you care about because of blind spots you can't see, fucking get people that you trust that know you well to point out your blind spots continuously.

That's one of the very best things I can say: no amount of self-inquiry, though you should do lots of it, will give you the same thing that other people's feedback can. This doesn't mean listen to everything that every hater says and take it all to heart, but it does mean find people that know you well, and who'll give you really honest, full feedback. But there's an interesting thing. Everybody has blind spots. It's really good to get a feedback mechanism and work on them. It's not true that everybody has defended blind spots in the same way. Personality disorders have deeply defended blind spots, and that's the thing. There's psychological trauma that creates it. But I would offer, there have been people who've contributed a lot to the world with major personality disorders, because narcissism makes you want to do great things. Sometimes external validation seeking is a strong fucking motive to do great things.

Jamie Wheal: Bain and Company, the management consulting firm, similar to McKinsey, has actually... They've had it explicitly in their internal documents, seeking early candidates on track for partner who are a little destabilized and have a little hole to fill. They actually want people who are seeking external validation to compensate for internal lack, because they're going to work harder.

Daniel Schmachtenberger: You want that in the executive positions and entrepreneurial positions; in the employee positions, you want people that are codependent enough to not have boundaries and just take everything that you put on them. So there's a bunch of interesting things there. There's a very cynical, but pretty true take from some smart tier one venture capitalists, Silicon Valley venture capitalists, that the main thing they're actually betting on is not the technology, but the intelligent sociopathy or dark [inaudible 01:44:07] of the founder. They will figure out how to win at all costs while pretending that the real thing that they're investing in is some kind of technology-

Jamie Wheal: Making the world a better place.

Daniel Schmachtenberger: Yeah. But if you want to win, then you bet on people who are really dedicated to winning.

Jamie Wheal: Which is externalities of the relational web. I'm willing to take the profit of my social capital in my exchanges and never mind the cost of a solid thriving social web.

Daniel Schmachtenberger: Right. We have companies set up to privatize their gains and socialize their losses. There's a perverse incentive on the structure of the innovation very, very deeply. But I was going to say, someone can do a lot. In fact, there might even be more people who've made the history books with deep personality disorders than not. That said, that's where you start getting really questionable. There were a lot of positives and a lot of deep negatives mixed in with it. I would say in terms of how you want to... how people want to orient themselves and who else to trust, because everybody has some brokenness. But those who are really willing to look at it, want to look at it, don't give very defensive reactions when it comes up, and then take real accountability to change it, are... doesn't mean they'll be perfect, but are definitely more worth trusting than those who have a defensive answer or an excuse whenever those things are brought up, and don't actively seek and receive those well.

So I would offer, yes, except that even in the presence of brokenness, you can do meaningful things. And wherever you get feedback about brokenness, really work on addressing it. In fact, actively seek input from people close to you, where you don't get super devastated, because you're like, "That isn't even me in my essence. That's weird shit that happened in my childhood. That's whatever it is." Me in my essence is the self that wants to transcend that. It's the me that doesn't want to be an asshole who feels bad when people tell me I come off as an asshole sometimes, or whatever it is. If that was really me, I wouldn't even be bothered or want to change it. I'd be at peace with it. The only reason I'm even bothered by that is because it's dissonant with my true nature. So I want to see it, so I can take responsibility to work on it.

Jamie Wheal: Well yeah, you gave me some feedback from that recent gathering where it was effectively some version of, I sort of seemed like not a great listener, kind of ADD, and often changing the topic or spicing things up. And because of that, I was like, "Yeah, totally, that's me." And I didn't... There was zero... I was like, "Oh, bring it." If this is it, if this is all it is, I could take this all day long, and I can for sure apologize to anybody that didn't feel heard in a conversation. But I was like, "Oh no, that's me without a doubt."

Daniel Schmachtenberger: But you and I actually made an agreement as friends that if we had that experience with each other, or if we heard other people bringing that stuff up, we'd bring it to each other. I think that's actually a minimum requisite of real friendship. A critical thing is, "Yeah, I'll actually be honest with you."

Jamie Wheal: Yeah, and I am grateful for that. For me, I always just think of physical metaphors, but I appreciate being able to bow onto the mat with you and train, even at full contact. Although we very rarely get to full speed, even a relatively vigorous attack gives us something to train with and learn from. Versus where we're getting into interpersonal challenges. Let's just talk about the global X risk sense making, or all the communities that kind of overlap in our world. My sense is we're terrible at playing well together. I held out somewhat naive assumptions that, "Oh, if people could only do circling or some other dialogical process, or if people learned and took, fill in the blank, NVC, nonviolent communication, NLP, neuro-linguistic programming, circling, authentic communicating, whatever, whatever, we would all be better at it."

But it seems like all that does is just take the obvious glitches in our personality structures and/or power dynamics and negotiations, and send them underground, and it just wraps them in swaddling cloth, and it becomes harder and harder to actually get to the heart of things. I'm aware we're going into a lengthy chunk of time. Let me just check to see how much time you have left, because I would imagine anybody plugging into this channel will be psyched to hear your conversations. So I've got three things left, and I'll just choose whether we do three or we do one.

Daniel Schmachtenberger: Let's do them. We're here.

Maintaining Existential Faith

Jamie Wheal: Okay. The first then is, how do we maintain good faith? I was just... There was some nature program on pods of orcas, and it was just talking about how the orcas have additional folds in their brains for social cognition and coordination. On the one hand, you've got these badass apex predators, they kick ass, they're smarter than great white sharks. They are literally kind of arguably the kings and queens of the ocean, and super smart cetaceans, big brains, and have extra folds specifically for social coordination. We feel like, at least when you talk about the kind of "thought leaders", clever people, Asperger kids building models of the universe, galaxy brain, take your pick of what indicators you want to describe. It feels like there's a bunch more lone wolves than orcas.

Sort of the lone wolf, it could be cats. It could be tigers, lions, whatever. But they need 400 square miles of territory for themselves. And they might come up against each other at the edges of their territory and they might sniff each other out. But end of the day, they cannot become pack animals, coordinated pack animals. They will fight or they will retreat to the center of their territories. So what is your sense? This is a goofy tee up to a very heartfelt question, which is, how do we become more like orcas and less like lone wolves so that we can coordinate more effectively together? Because on the psychosocial level, that to me has been a staggering and saddening fail point that I've just watched again and again, over the last 5 to 10 years. And we're not even into the hot stuff yet.

Daniel Schmachtenberger: I think as external pressures get harder, the coordination gets easier in many ways. I think you wouldn't have seen some of the alliances that happened in World War II, if it wasn't for World War II. That's one thing. I think when you're... The specific people you're mentioning are mostly, not exclusively, but mostly either non-institutional thinkers, or thinkers who are focused on the topic of what's wrong with our total global institutions. Even if they happen to be at an institution like FHI or FLI or something like that, they're still in a different position than most of the institution. Which is they're looking at what is wrong with the world system, including the academy system, the research system, whatever. The people that are most oriented to kind of see what is wrong with the system are probably not people who... It's different than the people who develop, say inside of an academic system, as thinkers, are looking for iterative improvement, but not looking at it as fundamentally broken, and thus are conditioned in a way to be able to collaborate through peer reviewed journals and through conferences and whatever with other people in their fields.

I don't... I would say academic collaboration is also really fucked, but the case that you're bringing up is a uniquely challenging case. Because I think anybody who is saying, "Everything that we know is inadequate," then typically, if they're oriented to do that, they've spent a long time thinking about, well, what would an adequate way be, so there's not some existing traditional frame they trust that they will say can adjudicate our differences with each other, or can give us a basis to collaborate. Now, sometimes that's because of a weird disposition they had to begin with. I'm smarter than everyone. I don't trust anyone. I always know what's right.

Jamie Wheal: The woundedness of the gifted child, basically in a nutshell.

Daniel Schmachtenberger: Sure, yeah. And this is one of the tricky things, is if you have someone who's really smart and sees a lot of things that other people don't see, and they're used to, every time that I brought this up with people and they thought I was wrong, I ended up proving that I was right. They have real examples of that. It then gets overgeneralized where it isn't true, and they can't see their own blind spots. And where people give them feedback about their blind spots or whatever it is, they're still sure that they're right and that everyone else is mis-assessing. I would say sometimes those people had a disposition that brought them into that place, and that's challenging. The other thing is, even if they didn't. So one thing I would say is, what brought someone to the place? What was the generative process inside of them that brought them to the place of looking at what is wrong with the world and how do we need to redo it from scratch?

If someone is in that place, the process that got them there is worth looking at. Then the next thing is, what effect did that have on them? Independent of what cause brought them to that effect. What effect did that have on them of looking at the actual, real tragedies of the world, the catastrophic problems, not thinking that the authorities have it dialed in, trying to take responsibility on it. There end up being a lot of frustrations and pressures and scars and sadness and whatever that comes from that. Then if they're trying to make sense of that, it's probably because they're trying to do something. They're probably overwhelmed, busyness-wise, so they just don't literally have the time they need to model sync with other people. So there's a number of factors that are involved. I would say I've actually had the experience of feeling very enheartened.

Jamie Wheal: Enheartened, okay.

Daniel Schmachtenberger: Yeah. I've had the experience of being... At first, when I first started diving into the planetary boundaries, catastrophic risk, fundamentally new thinking on governance, economics, culture, tech, etc., I would say I was naively hopeful that everyone who stated that they had those values did, and would work nicely together in the presence of the magnitude and timeline of the issues. And I would say I was pretty devastated by bad experiences in that way. But then that was par for the course, because I was just underinformed, and that was part of learning. Then I've also been enheartened at how many people who have spent their whole life thinking about things are willing to update their frameworks, and do want to work together and are motivated for these things, even if through some traumas, also out of real genuine care and love and desire for the world. And how, as things are getting worse in some ways, I'm actually seeing more genuine collaboration emerging. So yeah, I experienced both of those.

Jamie Wheal: Yeah. That's interesting as you described that, because I've had a very schizophrenic experience too. I'll go to events, conferences, gatherings, and it's usually one or the other. It's either, "Oh, this is an irreparable shit-show. Everything I was pinning my hopes on is turning to ashes. Fuck." I go home just being like, "Okay, now I'm going to the woods." But then there are an equal number of opposite gatherings where I'm profoundly heartened and inspired, and it's on... And I think... I'm just thinking this out loud right now. But I think the difference is the people that really give me hope are the people who have put their stake in the ground at any level of the stack. It could be education. It could be animal welfare. It could be sustainable farming. It doesn't matter.

It could be pick any single spot, they've found their place and they're doing their part. There's almost always a self-effacing humility, even if they're doing global scale things, and they're just phenomenally inspired, self-effacing humans. The ones that typically break my heart, churn my guts, are the ones who are holding out some inflated sense of their importance or their role, and just never get off the dime on coordinating. So in some respects, it's almost people overreaching for too much complexity, too much scope, too much mandate, and yet they haven't really put their stake in the ground, versus the people who are simply going to where they are most called. And they are in some respects sort of surrendered to their Dharma, to the part that is theirs to do. Does that track for you?

Daniel Schmachtenberger: Yeah, it does. There is a correlation, but it's not an exact correlation, so I want to add a dimension to it that I see. When you said something about where you put your hope or where your hope is derived from or something, and I find that the thing where my hope is derived is totally invisible. I can't go to a group anywhere and say, "We're going to make it through the meta crisis because of what I'm seeing here." Because this person who's doing unbelievably beautiful work in conservation of wilderness, doesn't even know about AI risk. Their conservation work is completely earnest and amazing and has saved all these species and nobody even knows about them, but it's also not on track for all the conservation that needs to happen even close. And they don't even know how to think about that. They don't know how to think about a financial system that needs to turn all that wilderness into paperclips.

Jamie Wheal: I think of Joe Brewer right down in Colombia, just posting these endlessly hopeful, but honest, sometimes raw updates on just trying to restore a little patch of forest in Colombia. I'm like, "Thank you."

Daniel Schmachtenberger: Similarly, there are people who I think are doing very good work on, say, an area of synthetic bio risk. And the fact that the work they're doing isn't at the right rate for the equations of what's happening globally, or doesn't factor some of the underlying game theory that is needed... I find hope in those things, but I don't find comprehensive hope in anything, where I can say... And even if I take a few of them, I don't say, "This plus this plus this equals adequate solution in time." So the hope is more that there are more things that I'm seeing that are not sufficient, but are necessary, emerging in more places, so that the convergence towards sufficiency is something I can feel, but I can't see it.

Jamie Wheal: Can we talk about that? Can we unpack that? Because that was the thing I wanted to end with, was what we had talked about in our prep call for this, which was what breaks your heart right now and what gives you hope? Whether or not we've covered what breaks your heart, you could touch back on it if relevant. But if not, you have said to me... And I, in my most intimate thoughts, conversations with my partner, Julie, and close, close friends, I orient right to some kind of upbeat optimism, even though my situational assessment is fairly grim. You've described some long arc of history, consciousness, cosmology, something, something, something where you kind of have a rooted faith that it works out.

Now walk me through that, the how and the why of that. Then also, because I just wrote a book taking the air out of all hockey stick rapture ideologies, how are these convictions of ours that it all works out in the end, not just blind faith? Not succumbing to a happily hopeful ever after rapture ideology of like, "And then some inflection point. And then we're all saved," whoever's the elect and identified as such. What's the difference between blind faith and radical hope? And how do you steer your way between those two? And it's fine if there's an appeal to metaphysics, because as long as we tag that transition, I'm happy to play into that space as well. I don't want to be just a reductionist, materialist, nihilist, existentialist. Although I can go there on a bad day.

Daniel Schmachtenberger: Okay, a few things. I want to go back and say one thing, because I think what you're saying right now is a good place for us to end. So I want to complete one part, because it'll be useful regarding the people who give you hope versus the ones that break your heart. I was saying there are a lot of people where I feel hope in terms of the importance of what they're doing. The fact that what they're doing is necessary, is effective, is well motivated, is any of those things. But the collection of those things is not sufficient. There's still a big delta of what has to happen, that isn't happening. I can't see the hope in a place, but I can feel something like a convergence, and I can talk about why I feel that.

But the other thing was that you were saying that there are people who kind of won't put their stake in the ground because they have a self-inflated sense of the importance of their role. I think there can be a correlation that those who are thinking about more complexity might also have special types of ego challenges, because they want nothing to be outside of their domain of scope. They don't want to trust that anyone else has other issues, that they need to have the totalizing solution for everything, and/or their work has to include and transcend everyone so that their work is the most important. So whether it's that they can't trust anyone, or whether it's that they want to be at the top of the stack of importance or whatever, there's no question that certain types of psychologically problematic dispositions can orient people towards thinking about the whole. And someone can also think about the whole and the long term because of being oriented towards abstraction rather than the physical instantiation, because they're just pretty disconnected and disembodied, and they actually don't feel real empathy.

They have just some abstract sense of utilitarian calculus or whatever. There's plenty of reasons that someone can be a big picture thinker for not great reasons. But that's not always the case. I also think that some people doing that work well is really important, and not more important than people doing the conservation work or the bio risk work or being exceptional nurses, but also critically important. I don't necessarily expect them to have their stake in the ground, be advancing something that has some clear metric in some part of it. It might be advancing insight about why those other processes are insufficient, but is still helpful for that other field to grow, or how they're interconnected, which is helpful for those fields to communicate with each other. I want people who are thinking at high levels of abstraction and I want other people whose sleeves are rolled up working on things. I actually want all those things. I have seen plenty of people who are in a specific domain, not thinking about the whole thing, that is a sleeves rolled up domain, that still have plenty of problematic ego dispositions.

There's plenty of dark triad middle managers in major NGOs that are focused on the environment or animals or whatever else. I don't want to have the sense that if someone is thinking about things at scale and communicating that knowledge, that always equals not that important, or not good at collaborating, or not putting their stake in the ground. I think both are really important. Actually, having recursion between people doing the on-the-ground work, and other people thinking about that theoretically across more spaces, giving feedback, seeing if their feedback is actually implemented on the ground, that type of recursion is really good. That's where my faith is: in kind of collective intelligence, and curves of collective intelligence that don't live in any of the parts, that really are behavior of the whole, not any of them. What's interesting is collaboration is possible with people's output, even if they suck at collaborating. They can still-

Jamie Wheal: You mean if they create artifacts, if there's bodies of content or information that are then accessible in the public domain?

Daniel Schmachtenberger: Right. There are some brilliant scientists and thinkers who are impossible to collaborate with, but who put information out that helps advance collective intelligence. They have a role to play as well.

Jamie Wheal: Yeah, to me, I even jotted this down, is this notion of sort of agonistic confederation. I think you and I exchanged an email 18 months ago where I was sort of like, "Hey, the funny thing is about all of our different Dharmas, is that virtually none of us would sign up for each others'." We sort of... I'm like, "Dude, your [inaudible 02:05:50] project, it's amazing, but I don't think you're going to beat QAnon with a better Wikipedia page." Or Tristan working on the social dilemma and this kind of stuff, I'm like, "Thank you. That's fantastic. I think it's tragically doomed." And I think we probably think that about each other's stuff, but at the same time, without a doubt, we'll be fighting the good fight and scanning the battlefield at the end of the day to see your banner still flying, and we'll be deeply cheered and galvanized if it is.

So to me, that notion... And maybe this goes all the way back to how do we reinvent democracy, is that notion. And I borrow this from John Gray at London School of Economics, but that notion of agonistic liberalism. The idea that we're not going to get to kumbaya consensus, that any utopian, singular monolith descends to fascism in its enforcement. And that can we tolerate the diversities of our Dharmas, and can we celebrate the differences? Can there be a confederation of the humanity involved, while there is absolute multiplicity of approach? Can we celebrate that a little more, versus creating factions and divisions because we don't agree on the 11th decimal point of our proposed solutions?

Daniel Schmachtenberger: I think it's easy to say yes to that enthusiastically at a superficial level. And I think there's a lot of devils in the details, because for everybody who is pretty certain that they're right about something, history is unkind to their certainty, and it turns out that they were more wrong once you factor in things that were unknowable at the time. So the idea that pluralism can at least make us less certain about any particular subset is good. Okay, sure. I totally agree with that. That said, some people will have relatively better certainties than others, where the decimal point really matters. On things like, can we do... This is very alive right now. Can we do a limited nuclear exchange that won't create nuclear winter and won't lead to full blown, strategic escalation or breakdown of global supply chains? And that's actually a good way to go, or can we not do that?

The difference... That's not like one of those, "We can just disagree and live and let live on those differences." It's like, "Okay, sure. We'll go with your model that the nuclear winter doesn't happen. Go ahead and let that many nukes fly." "No, no, no, no, no, you fucking got your model wrong. It is really consequential." Or, "I don't think that AI is going to be that problematic if we let it out into the wild, because there's such good near term market purposes. I have strong, motivated reasoning to want to let it out into the wild." "No, no, no. I'm pretty sure you're about to kill everything, and you're missing something really critical." So there are places where there's shit that's really consequential, and those differences are like, "Can we just live and let live? Or is the consequence of some of those things stuff that won't live and let live? It'll actually have second order effects that are really profound." Historically we couldn't fuck up as bad as we can fuck up now. We couldn't fuck up as quickly or as irreversibly, so you can see why there's more of a desire for certainty, but then there's also more motivated reasoning, too much information and polarization that makes it harder to get that.

Jamie Wheal: Well, I think just even in pop culture, you can trace this with Black Panther. You can trace this with the Marvel universe, the anti-heroes these days. And I think there's a... It's the sequel to The Da Vinci Code. I think it's called Angels and Demons, and it's some tech bro who's convinced that the world is overpopulated and we need to actually crash the population to save the other half. And Thanos famously with his snap wipes out half of the universe so that the other half can live. Black Panther, I forget what he's called... Killmonger. His critique is effectively a Black Lives Matter slash Black Panthers assessment of racial inequalities. None of the anti-heroes in our movies... In fact, quite often the anti-heroes are giving more voice to the current situation than the nominal heroes.

There's been this kind of postmodern play back and forth in our mythic storytelling these days, where the villain actually has run more of the facts. So the question, back to do we simply live and let live? A concern is if there's anybody, particularly if they're decoupled from relatedness, from family, from heart and their humanity, and they're excessively in their heads, and they conclude there is a hundred percent certainty of a mass casualty event if I don't do something. Then there's a whole lot of extreme and potentially sociopathic behaviors that they might see as 1000% valid and actually pro-life over the longer term.

Daniel Schmachtenberger: This is the problem with... This is the biggest problem with utilitarian ethics. There's a million problems with utilitarian ethics, and there's also a million things important about it. I would say, and this is beyond the scope of what we can do, but I would say that virtue ethics, utilitarian ethics, and deontological ethics all have a role. A kind of complete ethics requires various of those systems. But let's just talk about utilitarian ethics for a minute. This bad thing is going to... I'm responsible both for the effects of my action and inaction. If I could have done something and didn't, then the bad thing happened, I have some culpability in that. Good, that's important. But, got to watch out.

There are currently inexorable bad things going to happen. I can do something that has some badness in it, but less badness than the total badness that will occur if I don't do the thing. Therefore, I'm morally obligated to do the thing that has the some badness in it. That's all those anti-heroes that you're mentioning. It's not that that would never be true. Anyone who has made an act of self-defense that was warranted and harmed someone to prevent their family from getting harmed, did that calculus. And it might have been warranted. Or a country that did, or whatever. Simultaneously though, when we start doing it based on population numbers and limits of growth and IPCC calculations and things where slight changes in the calculations do lead into very different-

Slight changes in the calculations do lead into very different outcomes. And there's a bunch of unknown unknowns that I'm not factoring that could change erratically. The major problem with the utilitarian calculus is we usually admit a false basis for excessive certainty of the bad thing that is for sure going to happen.

Jamie Wheal: Yep.

Daniel Schmachtenberger: Because I'm so certain that bad thing's going to happen, I am now morally obligated to do this other thing where I also have a false sense of certainty that it will work and be less bad. And so it's the unwarranted certainty that's the problem there.

Jamie Wheal: Yeah.

Daniel Schmachtenberger: Not the idea that sometimes you have to take certain costs to serve the larger good. It's the certainty around those calculations. It's not only that, there's also: do I weight certain things that I care about more than other things, and whatever...

Jamie Wheal: Which goes back to what we were talking about 10 minutes ago, about the wound, the trauma of the gifted child. The idea that if I've always been right, but wronged, my tendency to presume a hundred percent certainty in my own models and calculations is going to be higher. Versus what you were suggesting 20 minutes ago, which was that it's relatedness, it's relationships, that shine light on our blind spots, turn us to our humanity, and potentially to some more humble calculus.

Daniel Schmachtenberger: If you have some kind of agonistic confederation, as you were saying, of people that have some real expertise who disagree, but who do trust the process of deliberation between them more than themselves, my faith in that thing goes way the fuck up, compared to any of them on their own or to an uneducated democratic choice.

Jamie Wheal: Yes.

Daniel Schmachtenberger: What is the right choice that the general democracy would make about the safety of publishing synthetic bio information or advancing a particular type of CRISPR? Like who the fuck knows? That's not a lay topic. That's a topic where the scientific knowledge required to figure that out is so detailed. Or you're going to harden the energy grid. Well, how much hardening do you need, based on what the size of the EMP or the coronal mass ejection is, and what's the cost of that relative to the opportunity space of the other things you could do?

Those are not good things for uneducated democracies, but they're also not good things for siloed experts with their own issues. Those are things for combinations of the values of the general population being factored, the realities of planetary physics and biology being factored. The experts, in a hopefully progressively more earnest process, and better collective intelligence, with augmented computation being able to process a lot of stuff that they can't, and better processes of transparency and oversight to prevent corruption, so that the population does have a basis for trust in those processes. All of that together to make a better augmented collective intelligence system is necessary to handle those kinds of things.

Jamie Wheal: Which goes perilously close to slitting my wrists or retreating to the woods again. But yeah.

Daniel Schmachtenberger: Oh wait. Now I want to say something about this. So the whole idea of our liberal democracy, we were talking about separation of the three branches of government, but there was also the separation of government, culture, and markets, right? That you have a market that has regulation but has a lot of freedom to do its own thing, so that it's not all government controlled, and that you prevent monopolies, so that the market has its own checks and balances because of voluntaryism, which wouldn't happen if you had monopolies. And you don't allow a monopoly of religion, you allow lots of religions. So it was a check and balance between religions, markets, states, multiple religions with each other, multiple companies competing with each other, multiple branches within federal government, most of government happening at a state level. There's a lot done to create a pluralistic type system.

It's pretty smart. But overall, the idea is we're going to let the market do a lot of the innovation stuff and have supply and demand and the invisible hand guide it. There are all kinds of problems with that that are clear now, that were not clear at the time of Adam Smith or even at the time of Ayn Rand, but I'll leave that out for now. Let's just say the fundamental calculus is you're going to do that, but there would be a market incentive for organ harvesting and cutting down all the trees so there are no national parks, and stuff like that. So we're going to have some state that has some law to be able to regulate some things and avoid crime and whatever. So we have a state that regulates the market. The main fucking thing the state does is prevent people from doing things that they have a personal incentive to do, whether a person or a group of people, right?

So market type incentive. Stealing is a market type activity, meaning advancing personal gain of ownership through a particular process. So the main thing the government's supposed to do is regulate market type activity. That only works if, while the state is watchdogging the market, the people watchdog the state, right? If the people don't watchdog the state and oversee it, which is why it's supposed to be a government of, for, and by the people, blah, blah, blah. Then of course, if you don't have that, then rather than the state regulating the market, the market captures the state and you get crony capitalism with a revolving door between people in big ag and FDA and blah, blah, blah. Well, we were mentioning that the system is too complex for the people to regulate this or to oversee the state, to be able to see where there are accounting errors and real problems.

So a lot of people distrust the institutions right now for understandable reasons. I'm saying we need to rebuild the institutions in a way that is fundamentally more oversightable and auditable. And I believe we can. But I'm also saying that we have to have a people that don't just bitch about a bad government while not being willing to engage, but who are like, okay, I do want to see what I really think about the models the IPCC is running and about climate change. This is a big deal. I actually want to be able to have some sense. Or I don't, I'd like to proxy my vote to someone who I think really does. I'm going to do a liquid democracy thing. Or I do, in which case, how do I get educated enough? And how do I become aware enough of the biases that I can meaningfully engage where I don't become over certain of the catastrophe, but I also don't become over certain that the thing is currently fine.

And so this is that place of how do we let go of the certainty that we know what the right answer is, but we also let go of the certainty that there is no answer, and we're fucked. We orient towards faithfulness that answers might exist and they take working towards, and then we commit to the discipline and rigor to be part of figuring that out. And knowing that we can't do all of it, the discipline and rigor of evolving ourselves to work well with other people that are also doing it, so that collective intelligence of groups of people working to do that can enact solutions. That is like the discipline of a people that want to be able to reinstate governance of, for, and by the people.

Jamie Wheal: Have you said that before? Because that feels like as cogent a synopsis of your optimism as I've ever heard you give.

Where Have All the Super Geniuses Gone?

Daniel Schmachtenberger: Well, I don't know if I've said it quite that way before, but there are reasons why I think that is possible at scale, which would be stupid to think if it wasn't for certain actual new possibilities that were not previously possible. I'll say briefly, I don't know if you saw this paper that went around recently on why are there no super geniuses anymore?

Jamie Wheal: The one about homeschooling?

Daniel Schmachtenberger: About aristocratic tutoring.

Jamie Wheal: Yeah, yeah. Yeah. And noting that you've got a strong confirmation bias to believe that paper, but yes, go ahead. Play through.

Daniel Schmachtenberger: Well, I was not in the aristocracy. I didn't have aristocratic tutoring, but it did actually give me some insights. Like, Nora Bateson is a very interesting person, and look at how the fuck she grew up. And our friend Aza Raskin is a very interesting person; look at who he grew up with. And so it did give me some-

Jamie Wheal: Nora is Gregory Bateson's daughter for anybody listening.

Daniel Schmachtenberger: Right.

Jamie Wheal: I don't know Aza's background beyond his tech background. What was his-

Daniel Schmachtenberger: His father invented the Macintosh.

Jamie Wheal: Oh-

Daniel Schmachtenberger: He was like the founder of human-centered design.

Jamie Wheal: Oh, fantastic.

Daniel Schmachtenberger: And so Aza's the result of that childhood and it makes perfect fucking sense. So what this paper said, and we can include it in the show notes here... And then I went and I checked in with Zach and Scott Barry Kaufman, a bunch of educational theorists. And various of them who'd done the research are like, yeah, this is like a third rail topic, but this is a known thing. And even Charles Sanders Peirce did a study himself, because he was wondering why he was able to make sense of shit across so many fields faster than so many other people were. And he's like, I don't think I'm genetically that different, so what the fuck answers it? So who are all the other people who are really, really exceptionally polymathic, and what did they have in common? And the main thing he found they had in common statistically, though it wasn't one factor, was that they were all the results of some kind of aristocratic tutoring, where they had really exceptional private tutors when they were young. And you look at the Dalai Lama, the Dalai Lama gets a kind of tutoring that nobody else ever has. All of the best Lamas training the fuck out of you from the time that you're tiny.

Jamie Wheal: Yeah. He was a punk ass trust fund brat to start with till his tutors got hold of him.

Daniel Schmachtenberger: And so you're like, is the Dalai Lama the result of a particular educational model that just doesn't scale for shit, but if it could scale would be very interesting? And then you look at Marcus Aurelius, and the entire first chapter of Meditations is dedicated to his tutors. There's a reason it was dedicated to his tutors: if you're being raised in that period to be the emperor of Rome, the best mathematician, the best grammarian, the best rhetorician, the best historian are your private tutors. And one of the key insights, I remember reading a paper many years ago that was looking at world class mathematicians. And it was saying, what is the statistical correlate we found across all the assessments of what makes someone a world class mathematician? Like in line to win a Fields Medal or something like that. And it said that the clearest thing is that everyone who became a world class mathematician, the most likely thing they had in common was that somewhere, when they were young, they interacted or studied with a world class mathematician.

And of course someone can try to flip the causation and say, well, it was because they were so bright someone brought them there. But actually the other direction is a very real thing, which is the teacher that I get for math in grade school does not think like a world class mathematician about math. They can teach me shit that somebody else figured out, but they didn't think that way. They don't know how to teach me to think that way, so I can get the output, not the generator function. And if I'm with someone who is not teaching me how to memorize a quadratic formula, but is helping me understand the whole history of why it was never understood and then how it was derived and why they derived it. And they're the kind of person that has derived shit like that themselves from scratch, I'm going to get a very different level of education and type of general competence in thinking.

So it turns out that even up until recently, von Neumann and Einstein both had mathematician governesses when they were young, before they went to school. And it makes sense that, like, it seems hard for me to learn Mandarin, but one year old babies are speaking Mandarin if they grow up in China. We're very neuroplastic, right? Whatever you're learning. So there were, like, Bucky Fuller terms that I was speaking as a little bitty baby, because that's what my dad was reading to me. And so saying isotropic vector matrix and omni-interaccommodative sound like fancy things, but it's just whatever the fuck a baby hears. They're going to imitate-

Jamie Wheal: Wait, what was the Omni one? Omni...

Daniel Schmachtenberger: Omni-interaccommodative, it was the word Bucky used that I remember was one of my early words. And some people who were not Bucky familiar were like, oh, that's a big word for a little kid. And it was just because it was fucking used all the time. But it was also that it wasn't just memorizing it. It was a process of dialogue to understand what does it mean for something to be omni-interaccommodative: that all of the generalized principles don't conflict with each other. That across the whole universe, all of the concepts and all of the laws of physics, chemistry, biology, et cetera, are omni-interaccommodative. It's actually a really great concept built into a word. Bucky liked to do that, but-

Jamie Wheal: You could just call it groovy. I think that kind of like, doesn't that sort of...

Daniel Schmachtenberger: That's what Bucky called it after enough joints, I think. So you've got this aristocratic tutoring model. When we were trying to kill feudalism, because it was so unequal and gruesome, and create democracy, all the vestigial remnants of feudalism were things you wanted to get rid of. And when you had a situation where almost nobody got a good education, because of course they were just going to be laborers or whatever, they got some basic guild knowledge of how to do a labor thing, maybe. And of course the nobility and then the aristocracy got to pay patronage jobs to the best thinkers to tutor their kids. It makes sense why you'd have such a radically unequal distribution, but you'd also get a certain kind of super genius born out of that privilege. But the thing is, that's not scalable. And you were just mentioning earlier, will these deep fakes be able to share all of Wikipedia's knowledge, or all of the knowledge on whatever, chemistry, formal logic, but also in the voice of whomever.

So now you imagine the positive version of the AI deep fakes metaverse world, where I go to the academy and I can ask Einstein and von Neumann and Kurt Gödel to all come sit with me and have a real time dialogue about formal logic, where all of them are both speaking with the kinds of semantic coherence to everything they ever wrote that they would have. While simultaneously having access to all knowledge of formal logic that has even been learned since they died. And their ability to model my theory of mind and pedagogically regulate how they communicate to my level of understanding. Well, that's pretty fucking phenomenal from a-

Jamie Wheal: That's a techno-utopian left hand, but I'll take it. Let's keep going.

Daniel Schmachtenberger: Now, I think we're only a single digit number of years away from being able to build this. And I think right now we're most likely building, because of market incentive, dystopic fucked versions of this thing rather than the awesome versions it could be. I think we could do the awesome version. Now, I also think that, corresponding with that, on the population lowering: the reason why we want to raise the population has to do with trying to keep up the economic output of the people, whatever, but of course that's not factoring that technological automation is going to obsolete most of the jobs. Both robotic and AI automation are going to obsolete most of the jobs. So then we have to answer, how do we rebuild an economy? And how do we rebuild an education? What is even the role of education? What's the role of humans in a world where AI and robots are better than us at most things?

Well, what are we still here for? What are we uniquely good at and what are we intrinsically incented to do? And what is intrinsically meaningful? That now has to be the core question to redesign fucking everything, 'cause we're no longer the best at doing most of those things. And so it's not just the running out of oil and stuff. It's also the becoming fundamentally obsolete in lots of things, but also obsoleted in the shit that mostly we don't like doing. And so then we say, okay, well let's assume that it's not pure AGI, but it is very smart ability to do computational kinds of stuff, AI, and robotics, but fundamental phenomenological experience, human connection, is still both intrinsically meaningful to us and we have unique capacity there. Well then, knowing how to interface with the robot AI type dynamics and how to do much more in the domains of what is uniquely human.

And we don't have to prepare people for the workforce, so we can rebuild an economy from scratch that doesn't say humans have to work to get provisioned, because that was only what we did when we needed the humans to do the jobs. And you don't need the humans to do the jobs. You can also make it to where the humans don't need the jobs. And you have a different kind of common wealth stewardship that also works better with ecological boundaries. I'm almost there, I'm going to converge and close a couple things. And then now what my AI tutors are giving me has more knowledge than Marcus Aurelius's tutors could have ever had, but they don't love me. They don't actually really care. I don't love them. And there are differences between the artificial intelligence and natural intelligence. So now my live tutor, my life teacher, is also talking with me. And I just had the conversation with Einstein or Jefferson or whoever it is.

And they're asking me, what do you think is different from what that AI Einstein said and what the real Einstein might say if he was alive today, or might have said back at that time? So now we have to start to think about the way minds are contextualized in history and the difference between natural intelligence and artificial intelligence. And the difference between hydrocarbon self-organizing systems and silicon systems. And what is intelligence itself? And what is the relationship of intelligence to sentience to life to consciousness? So now the human teacher's oriented to those very, very deep theory of mind questions that once humans start asking, they go recursive on knowledge processing, on intelligence and meaning itself. And you get these kinds of exponential curves. And can we have way more people being teachers and way more developed in that system? Because we automated a lot of the things people already currently do.

So can I have 10 times as many teachers who all get PhD level training, as opposed to whatever current level of training they get? Starting with kids, so the kids get mathematical geniuses. They both get to study with the Einsteins and Gödels and whatever, but they also get to study with people who have that level of capacity. And similar in the way that the da Vinci machine is already starting to obsolete surgeons, because an AI robotic surgeon's just going to not have their hands shake. So the doctors of the future, the doctor nurse role, starts to merge more to who can do real human connection with the patient and interface with the AI robotic type systems and do a better job at various specialist tasks. Does that mean that we can develop people that have way more human connection, a lot more? So we have a much higher percentage of teachers, parents, nurses, high connection-oriented roles, way more developed. And way more AI augmentation of how we develop people, to where the Einstein and the Dalai Lama become the median of human development.

Jamie Wheal: Yeah. I mean, to me, on the conversations of like universal basic income and those kind of things, the question is always, well, what on earth would we do? We would be pointless or purposeless if we didn't have our work, which is very Max Weber Protestant work ethic kind of cultural conditioning. And my first sense would be like, holy shit, we would go back into childcare and elder care and art and music and gardening and community and spirituality. Like all the amazing human things that humans have always cared about. And meanwhile, we are outsourcing them to bottom of the barrel minimum wage workers instead of making them central to our shared human experience. So that tracks really closely with what you're suggesting.

Daniel Schmachtenberger: So, do I think that, as it stands, the state cannot check the market if the people aren't checking the state? Do we have to rebuild the state to make that possible, 'cause it's impossible right now? Yes. We have to rebuild the state and the market and info processing and education and a bunch of things. But can we already have the motivated tiny subset of people listening to things like this, say, you know what, I'm going to not default to the lazy certainty that everything is fucked, because that's just lazy. There's a trillion possible solutions I don't even know of. And in fact, the people who I'm listening to, who've thought about this more than me, aren't hopeless. So maybe it's fucking dumb for me to go hopeless if the people who know the problem space more than me aren't hopeless. Right? And I'm also not going to default to it's not my job, I can't do anything about it, it's too hard to understand.

I'm going to work to understand it and work to see what is mine to apply. And I'm going to work to find other people that are doing that. And I'm going to work to try to be able to rebuild something like bottom up governance of people who are really making sense of things. So I want to see people doing that. And I also think that we have the capacity to be developing people with those capacities at scale. Now there's a boot loading issue. There are some changes to make to infrastructure and some changes to social structure and some to superstructure that all affect each other. It would take more time to go into how I see that boot loading looking. But when you're asking why do I have hope, I was just wanting to play with a couple examples of how the same tech that threatens us can actually radically enable us.

But the current market dynamics that are guiding it don't build that version. They build the Facebook AI, not the... It would be so easy for Facebook or Twitter to do Audrey Tang's unlikely consensus and upregulate all the things that a supermajority would agree upon. And everybody gets to see what are all the things a supermajority would agree upon. And you run a third party candidate that says, anything a supermajority agrees upon, I'm going to run on. Fuck, that would just start to make amazing changes, that quickly. So it's... the tech can do different things-

Jamie Wheal: And Audrey Tang, that's liquid democracy, Taiwan. So, for anybody looking to track her down. Yeah.
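For listeners who want a concrete picture of what upregulating "all the things that a supermajority would agree upon" could look like, here is a minimal sketch, loosely in the spirit of the Pol.is-style process associated with Audrey Tang's vTaiwan work. The group labels, the 0.66 threshold, and the toy votes are all illustrative assumptions rather than a description of any real platform; the point is simply that statements get ranked by the support of their least-convinced group, so what surfaces is cross-camp agreement rather than whatever is loudest.

```python
# Hypothetical sketch of "unlikely consensus" ranking (Pol.is / vTaiwan style).
# Votes: statement -> {group label -> list of +1 (agree) / -1 (disagree) votes}.
# A statement is upregulated only if every group clears the supermajority bar,
# so what surfaces is cross-camp agreement rather than majority-of-the-loudest.

SUPERMAJORITY = 0.66  # illustrative threshold, not a real platform parameter

def agreement_rate(votes):
    """Fraction of votes within one group that are 'agree'."""
    return sum(1 for v in votes if v > 0) / len(votes) if votes else 0.0

def unlikely_consensus(statements):
    """Return statements every group agrees on, ranked by their weakest group's support."""
    ranked = []
    for text, votes_by_group in statements.items():
        rates = [agreement_rate(v) for v in votes_by_group.values()]
        weakest = min(rates)  # the least-convinced group sets the score
        if weakest >= SUPERMAJORITY:
            ranked.append((weakest, text))
    return [text for _, text in sorted(ranked, reverse=True)]

if __name__ == "__main__":
    # Toy, made-up data: two polarized groups voting on three statements.
    statements = {
        "Ride-share drivers should carry insurance": {"group_a": [1, 1, 1, 1], "group_b": [1, 1, 1, -1]},
        "Ban ride-share apps entirely": {"group_a": [1, 1, -1, -1], "group_b": [-1, -1, -1, -1]},
        "Publish the algorithm's audit results": {"group_a": [1, 1, 1, -1], "group_b": [1, 1, 1, 1]},
    }
    for statement in unlikely_consensus(statements):
        print(statement)
```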

Daniel Schmachtenberger: So I think that the tech that creates so much power under the current extractive and coordination failure dynamics leads to self termination, but it also enables fundamentally new possibilities that previous social and political theorists could not have thought of, because they weren't possible. We could not do planetary wide coordination without the tools to be able to do that. So yeah, I do think that if I take all of the failure modes that I know of, all the self terminating things, I take the timelines associated, and then I say, is there any path through this that is both feasible with physics? It doesn't require replicators and dilithium crystals and stuff. It's like, feasible with physics and feasible with human nature. And assuming we condition people differently, but I don't have to assume that everyone has an IQ of 2000 and is omnibenevolent to begin with.

Yes, I absolutely believe there are models that are adequate, that are both possible with physics and possible with human nature. I believe they're threading the eye of a needle. I believe there are way more ways it fails, but I believe that the likelihood for it to succeed rests on the people who are oriented to how we make it through the eye of the needle, and who dedicate their whole lives not to asking the question, do we make it or not, but how does my life increase the likelihood that it does.

Jamie Wheal: Hmm. Beautiful. So yeah, that notion of leaving space for grace. The idea that there is always that miracle element, and Kevin Kelly has that beautiful quote where he's like, it's way easier to imagine the devil than God, because of basically the second law of thermodynamics.

Daniel Schmachtenberger: Yeah, exactly.

Jamie Wheal: Everything awesome, from the flowers to you and me, is highly improbable, and we have to get better at believing in the improbable.

Daniel Schmachtenberger: Yeah. And feeling it even before we have a thing to believe in. And then the other thing is I don't believe that humanity's going to make it. I also don't believe it's not going to make it. I know either could happen. And my disposition is not just because I'm sure we're going to make it so I have that certainty. It's that I'm sure that I want my life oriented to what could help it happen. And I know that everyone who's sure that we aren't has unwarranted certainty.

Jamie Wheal: And disconnection from life force.

Daniel Schmachtenberger: Yes. Yes. And then so the other thing is I want my life oriented to the best possible future, but I also want it oriented to the most honoring of life now. So if we don't make it, I still wouldn't change any of how I'm living because it is coming from the like recognition of the radical amazing that I get to be alive for a second at all. I had this experience, this is so fundamental, is like, I remember when I was a teenager looking at a sunset and feeling this experience of like, I would incarnate this life and I'd go through every fucking hell and difficulty for this moment because it's so, as opposed to no experience at all ever, it's so incredible. And then I'm like, I've already had thousands of moments like that. And if I've had thousands of moments that are worth an entire life, it's all gravy from here in terms of what's in it for me.

Right? Like it just keeps being this embarrassment of riches that the beauty of life keeps happening and it's all worth it. And that doesn't mean the pain isn't painful, it just means I've experienced pains that were excruciating and I can't feel them right now. And I don't choose to dwell on them. And I've experienced beauties that were awe-inspiring and I do dwell on them. And so it's worth it. And so there's also, like, if we don't make it, I don't just want to be partying in the Bahamas. I want to be in sacred service to life because it's the only thing that actually makes sense.

Jamie Wheal: Yeah. Beautiful. Well, I mean, and that feels very resonant with the Arjuna and Krishna story in the Bhagavad Gita. So some reconciliation with the impossibility of this thing, and also that we're never off the hook for playing our part to the hilt.

Daniel Schmachtenberger: And Dharma is not a utilitarian ethic. It's a virtue ethic. The utilitarian ethic says I can calculate the possible future states and choose an action based on the best utilitarian outcome. Except you and I both know that we can't know what the wind is going to be 10 days from now, because fucking complex systems are unpredictable. And so I want to do utilitarian calculus where I can, but I also want to say, if I can't predict the future, each moment is unfolding from this moment. And there's a trust that if I'm in the deepest integrity in this moment, the future that unfolds from that is the one that I want. And so it's fundamentally different. It's not based on future outcome. It's based on a trust of what unfolds from the way this moment is lived. And the [inaudible 02:37:17] is just like, the whole thing is understanding that Dharma, let go of the fruits of action and do what is right because you know it's right.

Jamie Wheal: Yeah. Which also feels like why Yoda can never call, like, eight ball, corner pocket. He's always like, I'm feeling the force, but there's perturbances, there's things. But it is a dynamic flow, yeah, of a complex system. Right. So yeah.

Daniel Schmachtenberger: Cause Yoda isn't God, right? We didn't violate idol worship. It's not that this thing is all determined. Still, there's great mystery. And I don't know. And the wisdom is not, I know it all. The wisdom is, I have a beautiful relationship with the unknowability and still know how to live in the presence of that.

Jamie Wheal: Yeah. And even the story that, whether it's Yoda's agnosticism, even though he's jacked in and total gangster, he still cannot call the shot. But the one that, yeah, I may noodle on this more, but for me, it's the Lord of the Rings and just the genius of the ending. That Gollum is the enemy and he is corrupt, and you think, oh, is he going to be saveable? Is he going to come back to Sméagol? And he does that whole schizophrenic, Chinatown back and forth. Right? And it's not that he's redeemed in the end. And Sam wants to kill him and Gandalf's like, hold, man. We don't know what part he has yet to play. And then in the end, it's not that he's redeemed and comes over to team good guy.

He actually descends into his most corrupted self. He folds where he has folded many times before, bites the ring off Frodo's finger, and that is how it ends up in Mount Doom. And destroyed. And that's what I mean about the space for grace. It's we don't know. And I feel like you're echoing that really strongly. We cannot, with certainty, predict what happens next. And if I'm hearing you right, what you're saying is the only thing I can do is act from virtue, act from love, act in defense of, and bear witness to, life in this moment and the next moment and the next. And perhaps if there is a path to salvation, if there is a path to redemption, it lies that way. And we cannot see it from here. But if we keep the faith in it, that is essential for us ever arriving.

Daniel Schmachtenberger: Yeah. Yeah, totally. And when the fellowship of the ring came together, they knew the quest was most likely impossible and they didn't know exactly how it would unfold. And there was also wisdom that guided that it was the right thing to do and needed to happen.

Jamie Wheal: Yeah. All right. So before I fuck off and sail west over the ocean at the end of Middle-earth, I will hang out with you, optimistic and rosy one.

Daniel Schmachtenberger: It's weird to be... this is interesting. Nate and I talked about this recently, where a lot of the people that I... not everybody, I don't want to make any generalizations that are too sweeping, but some people get into being very good at, like, thinking about catastrophic risk because their disposition is that they catastrophize. Like in the CBT sense of running worst case shit all the time in their own personal life. But then they're smart, and they end up... the good side, the gift of that, is that they're able to think about how could this tech really mess stuff up and how could this agreement get fucked up and how could this be gamed? But that's the gift of it; that disposition is not oriented to find solutions, it's oriented to find what's wrong with any possible solution.

I was talking with a different friend the other day and I'm like, look, every time I tell you a new possible catastrophic risk, without a lot of rigorous analysis, you just believe me that that's probably going to happen. And every time I tell you a possible solution, you're like, what's wrong with that? What's wrong with that? Until you find something possibly wrong with it. That's a bias you're running. Cause why don't you just as rigorously analyze the catastrophic risk, for why that probably won't happen? And if one is oriented to catastrophize, they won't coordinate with other people for shit, because they'll catastrophize the relationships and assume that everyone is fucking them and just doing it for game theoretic power, just virtue signaling and everything else. And so you actually have to overcome catastrophizing in one's own self to actually be in service to the catastrophes, to be able to think through them and yet be oriented to solutions and oriented to collaborate with other people. So it's weird to be in a place of like, no, we really want to understand all the things that are fucked so we can address them, but it is so we can address them. And that's both the we, not the I, and the address, rather than just leaving them.

Jamie Wheal: Hmm. Takes a village to save a planet.

Daniel Schmachtenberger: This was fun. Jamie, this was really fun today.

Jamie Wheal: Yeah. Well, listen, so for everybody listening, you probably don't need any introductions or reminder to Daniel and his body of work and his relentlessly goodhearted contribution to finding ways forward. But I would super encourage everybody: we will include in the show notes a bunch of the different references that you made, potentially to that tutoring paper. You had mentioned another one towards the end. We can kind of go back and-

Daniel Schmachtenberger: I'll send all the ones I can remember.

Jamie Wheal: That sounds great. And then just an encouragement that out of all of this complexity, the simplicity on the other side is for each of us and all of us to find what's ours to do and get cracking. And share the triumphs and disasters. Not just post wins, but also seek solace and comfort so that we can keep on keeping on together.

Daniel Schmachtenberger: And get super dubious of your own certainty. Talk more to people that think really differently than you, and really try to understand both what they're saying rationally, but also feel the internal state of the trueness of what they feel. And while decreasing your own certainty, increase your own rigor and work towards what you would need for effectiveness.

Jamie Wheal: Beautiful. Well, Daniel Schmachtenberger, The Consilience Project, thank you for your time, my man.

Daniel Schmachtenberger:  Love you, Jamie. Good talking to you.

Jamie Wheal: Cheers.
