Will AI Replace All Human Jobs? - Anders Indset | ATC #571

In this impactful episode of Around The Coin, host Stephen Sargeant engages in a deep conversation with Anders Indset, a visionary tech entrepreneur, investor, and bestselling author. Anders, known as "The Business Philosopher," is a Norwegian-born writer, deep-tech investor, and former elite athlete. Named a top global thinker by Thinkers50, he's the bestselling author of The Quantum Economy and The Singularity Paradox. As founder of the Njordis Group and GILT, he advises global CEOs and political leaders on the future of leadership, technology, and AI.

Host: Stephen Sargeant

Guests: Anders Indset

We are also available via:

Buzzsprout | YouTube | Quora | Medium | X | Facebook | LinkedIn | Soundcloud | Apple Podcast | Spotify | Player FM

Check out http://deepca.st/around-the-coin on DeepCast to delve into episode transcripts, key insights, discussed topics, and more!

Episode Transcript

Stephen: Around the Coin podcast, this is one of those episodes that only comes around once in a decade. Maybe once in a century, I'm not sure, but this is such an impactful episode. We have Anders Indset. He's a visionary, he's a tech entrepreneur, he's an investor, and he really goes deep about AI and how it's going to impact humanity.

He has all these theories from multiple books, including Wild Knowledge. Oh, did I mention he is a bestselling author as well, and he's been talking about organizational structures and behavior to some of the biggest Fortune 500 companies around the world. This is a perfect discussion if you are building in any space, but especially in payments, tech and crypto, 'cause we really go deep on AI and how it's going to impact our everyday workflow as well as careers and jobs. And some of these things may surprise you about the jobs that AI is gonna take out first. This is a beautiful episode, and here's a little secret: if you listen to the last three minutes of this podcast, it may change your business and your leadership skills forever.

Hope you enjoy this episode.

Stephen: This is your host, Stephen Sargeant. We have a philosophical episode today, but we're still talking about tech. We're gonna be talking about AI, we're gonna be talking about the future of, you know, finance, the future of the way we think, with Anders Indset. Anders, you have all these titles. You're a philosopher, a deep tech investor.

You're obviously a bestselling author. Many people probably recognize you as soon as you come on video. When's the last time you had like a nine to five job? Take us back to maybe your early beginnings of nine to five, and what brought you onto this path of really focusing on, you know, the vision that you see, not just yourself, but society in the future.

Anders: Well, thank you Stephen. I mean, to me, that was never a reality. I started out in a small village in Norway, a mining town with a lot of history and culture, World Heritage listed, close to the Swedish border, very authentic and cold, about three and a half thousand people. And I said to my dad at age 12, I'm never going to work for someone in my life. I wanna do my own thing. And I've kept that. So basically, I didn't really have that structure. I've had companies, I've built organizations and companies, but I've always been a very passionate worker to the extent that I like to do stuff.

So, although I've been sucked into project structures and meetings that go by the regular hours, I couldn't really say that I ever had, you know, a nine to five job. I've always been at it to learn and to ponder, you know, things that I am interested in. So, yeah, I can't think of one.

I mean, even when I had my companies, I also played sports. So there is, in Europe, there is something called Olympic handball or team handball. Very popular sport in Germany in particular. Also Olympic sport. And I played here in Germany where I live now professionally. So I had an agency and I played professional sport.

So I had basically the double luxury of working during the day and during the evenings and on the weekends. So, we had the games. So I guess that's been a part of my life ever since I was a very young kid. So, yeah, no, nine to five for me too.

Stephen: No nine to five ever. That's awesome. When you look at, like, Germany and, you know, some of the Nordic countries compared to the US, where a lot of our listeners and tech founders are, in, you know, Palo Alto and, you know, New York, how do you compare the European entrepreneurship culture versus, like, what you've seen maybe in the West?

Anders: That's a very good question. And I think if we look back over the past decades, Europe has missed out on, you know, a lot of these trends and waves. We have not been good at anticipating future technological progress and understanding the implications of exponential growth in technology. So we haven't been able to build business models, and this is twofold.

I mean, one is the bare understanding of the technology. The other is also access to scale capital. So there's a lot of good science and good educations and good research done in Europe. And now maybe looking forward we are theoretically at least in a better position to go into health and to also scale organizations in deep tech.

But in the past it was more of a reaction. So we copied business models, starting all the way back in the early days with e-commerce, taking some of these online shops and just replicating them for the European market, then the whole cloud computing wave. The same has been, you know, true for AI, that we just try to adapt and react, which I think is a very unhealthy state for the economies to be in.

Europe has been very, you know, bureaucratic, a lot of regulations. So it's been slower than the mindset that you have in the Valley, in the Bay Area in general, and in New York, obviously. And also if you look at China now, it's a different mentality. China is a very hardworking society, whereas in Europe, in the Nordics in particular, we have learned to enjoy the good life of going to the cabin one or two extra days a week, and have been in, I would say, like a beauty sleep, slumbering along based on the previous successes of industrial export, and, you know, in particular from Germany, the hidden champions and all these industrial companies that had licenses to print money just by growing and expanding to new markets.

So I think now it's a very interesting phase, because with the new administration in the US, one thing that the president has done is that he has accidentally woken a very strong giant on research and education. So Europe has been poked awake from its beauty sleep and is now starting to realize that we should work again and we should be a part of this, you know, race into the future.

And at least when you look at it economically speaking, I think, I don't think that's a good idea for the US because Europe was, was just, you know, going along. And now I think things are starting to happen. So I'm very excited to see that in particular in startups and entrepreneurship, that things are rolling now and there are new ideas coming.

But with that being said, there is still a long way to go to have the same spirit. So I still think the US has a much more, you know, hands-on entrepreneurial spirit and is further along in anticipating future trends.

Stephen: I'm curious, 'cause you're talking to, you know, CEOs of billion-dollar companies, and you're also talking to students, you know, that are looking at their life's path of where they want to go career-wise. What do you think? Because, you know, I see China, and yes, they have a very hardcore culture and they have the very long-term goals.

But you know, a lot of the Asian, you know, friends that I've had, they've all, they're also very insecure 'cause they've been like, you know, they had to do certain things. They didn't have a chance to live the good life that you said, where do you find the balance between not being in a beauty sleep or being so militant that, you know, the person grows up to be what might be insecure or just like so in fear of, you know, results oriented that they haven't gotten a chance to, you know, live their life to maybe some of their European peers?

Anders: Yeah, so first of all, I don't necessarily think that having that state of mind of that comfort zone leads to a lot of happiness as such. So, when you have millions of purpose seekers, you know, wandering around being reactive, that makes you very tired and insecure and worn out and lazy and depressed. So, I mean, you see that in the Nordics and you see that also in Europe, that the mental state has dropped.

And this is in particular, I think, because people aren't active. And when I say active, I mean that they do something where they have a sense of fulfillment, where they also have the agency to fill life in order to have a fulfilled life. You do work, you achieve something, and you move on.

So I wouldn't necessarily agree with that. Like, if you look at China, there was an Ipsos study that showed that Chinese people are so much happier, and that could literally be because they are, you know, working 10 hours a day and have, you know, structure in their life. Versus the freedom to, and in particular, I think, after COVID, when everyone went from, you know, working in the office to that freedom of having a home office, which for many was a terrible, terrible outcome, they thought it was good.

So I think having that structure and that pressure isn't necessarily a bad thing. So that's just something that I'm very interested in. And I also spend a lot of time thinking about, you know, how can we activate people? Because we are kind of, sort of, like philosophical zombies just wandering around and reacting to impulses, with the complexity that we have created, basically, you know, fueled by the social media mechanism of thumbs up, thumbs down, you know, the instant reaction that leads to an incentive.

So the economic driving force is a quick reaction, and that's not healthy for human beings. This is what technology is really good at. And, you know, the reflection or the reasoning and pondering problems, that is something that we then lose. And that is, I think, a big shift that we have to see with the technology: change our educational models.

And I think this holds true for most advanced countries, but also on a global level in general, that the educational models that took us here will not take us there. And that needs a radical shift from the knowledge workers and the knowledge society that we created, who were stuck in their own self-evident truths about the world.

And they could become experts and they could build something on top of that. So I think this is interesting also because if you look at Asia, and China in particular, we have been sitting in Europe saying how terrible it is for them, how they build their economy, and they use all these old industries and energy forms and they pollute, and all of a sudden they're the leaders.

So I wouldn't be surprised if, you know, five or 10 years down the road they are carbon neutral or have green energy sources, and at the same time have the only profitable business models. So I think the efficiency race in technology, the progress of technology, is very, very important for a stable economy, and a stable, functioning economy is the foundation of a functioning society.

So the economy is the operating system of society today, and I think there's a lot of, you know, work to be done here in the US and also in Europe in order to have that as a stabilizing factor.

Stephen: You know what's interesting? You mentioned knowledge a couple times, and I know you know through some of your books you have this immortalized concept of wild knowledge which is an interesting definition in your books. Can you explain maybe the difference between knowledge and wisdom? Especially we have a lot of founders that listen to this podcast.

I think they may get confused about the definitions of either of these or use the words interchangeably. Can you kind of come with it from your own perspective, the definition of these two concepts?

Anders: Sure. So I mean, when you refer to Wild Knowledge, I wrote that book probably like 10 years ago, basically about the concept of now-tamed information, things that we hold to be true, that are validated. And if you look at this today, I would be much more radical in the definition, because we had an information society that was kind of, sort of, chaotic, and then we came up with, you know, AI and the concept of large language models that we are now starting to get tired of because it's boring.

We wanna do the next big thing. But it just shows us how insanely good technology is at structuring information and providing it in a way that we can define as an absolutism based on our best explanation today. So what we will get out of that is that we will function off of these mechanisms. So you could envision, like, a politician speaking, and everything that is coming out of his or her mouth is validated in real time.

And there's an absolute like definition of that thing. If that is wrong, it'll be error corrected. So a machine only does the same mistake once, right? If we see it as a problem or a mistake. And this is the knowledge aspect. So the knowledge workers, the knowledge society was built on expertise that we had these people that knew something and were very good in the field so they can categorize.

And I think this is what we see today, that knowledge, or intelligence, becomes a commodity. So we have today, already, almost infinite free access to intelligence, or call it knowledge. To that extent, we could always take up the pondering of what knowledge is from a philosophical standpoint, what we really can know.

So if you look at that definition, there's a slightly nuanced way to take this argument, because we could also argue that we cannot really know something, because everything that we build as knowledge always has an assumption at the foundational level, right? So I always say, if you play with knowledge, you best know your assumptions.

And, and the first, you know, first one could be this is real or there is a physical reality, the perception of that. So we know there are questions in quantum physics and the relationship between, you know, Newtonian physics and quantum physics that hasn't been figured out and it's not complete or maybe even slightly wrong, but still we can build amazing things on top of that.

And that is what we define as knowledge. There are definitions in philosophy, like justified true belief: it has to be justified, it has to be true, and it has to be part of a belief system. So this is the knowledge aspect of that. And then there is a will to truth.

So there's a strive for a better explanation, there is a deeper understanding of how things are tied together, and you are aware of it. So the society that we had was based on, you know, optimization so we have better information. So we had a reactive society, as I said, that, you know, was all about getting knowledge; solutions, fixed solutions, were the driving force of progress.

And I think this is incorrect, because I don't think we solve problems. We just create better problems. And having wisdom and the ability to reason is to have, from first-principles thinking, a better understanding of the problem. And I think this is what entrepreneurs and people in startups have to distinguish from making quick money and selling or exiting just to make quick cash because they think they have a set product.

And you will see that a lot in AI, where you think you have a great idea and you have knowledge about a problem, and then you realize that, you know, the feature that you have built on top of an AI is something that Google or someone can roll out overnight, and your market is literally dead regardless of you having raised 200 million or whatever.

So this wisdom and understanding is basically the reasoning that you can do on a human level, the drive for that will to truth. You seek the complexity and you seek a first-principles argument on how to define a problem. And then you look at how you can make it better, to have progress, and I think infinite progress is possible for humanity.

So one is maximizing the art of being right. That's the optimization game. And the other is maximizing or enjoying the art of being wrong, and seeing the art of being wrong as an art form where the foundation is progress and not absolutisms. That is basically philosophy, the strive for wisdom. So those are the two categories. What I started doing back then with Wild Knowledge is more of a startup, you know, entrepreneurship book.

But that has, you know, evolved into a deeper philosophical understanding of what we need to do in this time of very rapid change within technology, and how this all relates to artificial intelligence, or artificial general intelligence later. Those are very fundamental things for the future of humanity, and we need to work on that art of being wrong.

The philosophical contemplation, I think that's, you know, fundamental for us to extend our you know, time here as an organized human species on, on, on planet Earth. So I don't know if that makes sense to you.

Stephen: I have, like, so many thoughts based on what you said. I don't even want to interrupt you, but I have to ask: you talked about, you know, knowledge and progress and doing. You know, I've worked with a lot of different, you know, personal and performance coaches, and they talk about this knowing-doing gap, right?

Are we too much about the knowledge? And we're not about executing, because I feel like we have all the knowledge, but we're still reading every book, listening to every podcast, we're going to every conference. We're almost knowledge zombies where we're just intaking the information, but we spend very little time implementing it or doing it.

What are your thoughts around that? Do I have that wrong, or do you see that we're seeking so much knowledge, we're spending so much money on courses and programs, but we're not doing anything? It almost feels like the gathering of knowledge is the work, is the doing. What are your thoughts, especially from an entrepreneurial lens?

Anders: Yeah, no, I think there is a part of that that I resonate with. But I think, you know, also we are gathering information, which is, again, not knowledge. So we are just, you know, spitting out things that we have picked up without even understanding the conceptual part of that. And that is a part of the knowledge.

And AI does that to a much better extent than we do. And I think that information drive, to have very, very simplistic information that you have gathered and then to think that this is something that you know, I think that's a big problem in today's society.

And that goes more along the reactive part. And that's why we have a lot of division, because we are not open to changing our minds. We're not open to learning. We are looking to be important, to have followers, to have reactions, to have economy. And I think this is a problem with entrepreneurs also, that you are so stuck in your, you know, category, and you don't even get the full grasp of that idea and that concept.

And so, yeah, I think to some extent young people have a lot of things that my generation didn't have growing up. But on the other hand, there is also a very important aspect of this, and that's to understand how things are, you know, interrelated. And I think when we look at that from, I would say, a society aspect, we are not looking for, you know, experts.

I think this is the era of generalists. So you will have domain experts, you will have the Messis, the Ronaldos, you know, the superstars within the domain, but only those are needed. So you don't need coders that can program and code on a good level. You need superstar coders, and the rest will be technology.

So I think, you know, we are more in an era of generalists emerging, and I think that is where young people should also emphasize and focus. Because everything that we don't know to a deeper extent, we can enhance with technology, and therefore grow in various fields and domains. And I think there's much more room for that than there is for a quote unquote good expert today.

And moving on, I think this is also something that we will see very radically. You see it also in the business world. You see the whole management layer, you know, managers were experts on, you know, managing people and having processes and structures that they learned out of business school. Management today is technology, right? So that's why many companies are laying off management. We need more leadership. We need new leadership, and that requires different skills. And I think the same will happen for the workers in the organization. And I think the whole organizational structure will change, how companies are built.

So we are up for some very interesting times ahead when it comes to, you know, understanding business and these concepts of the past.

Stephen: You know, it's funny that you mentioned that. I think it was Anthropic's CEO that just said, like, 50% of entry-level jobs are gonna be taken away by AI. But you and I had another interview previously where they talked about, you know, everyone's worried about the entry-level jobs, but AI's coming for that senior manager role.

You know, the person that's coming up with the reports and making decisions across the globe, those are the ones that AI does very, very well. So what are your thoughts? You just mentioned organizational structure. Any thoughts about entry-level jobs being completely decimated by AI, and is that a good thing?

Like a lot of people in those entry level jobs aren't happy working on those jobs, so this gives them an opportunity to learn new skills, try out new things. What are your thoughts around that?

Anders: Yeah. So I'm not gonna be a dystopian here. I'm not, you know, a naive optimist. I'm not part of the e/acc movement and accelerationism, that everything will be perfect through technology. Nor am I definitely in the field of negativism. I consider myself a possibilist, and possibilism is a deeper understanding of what we could do, which does not necessarily mean that we should do it, nor that we will do it, that it will happen.

But we could theoretically build a very good, prosperous future. And AI is amazing and would help us do that, but it's not given that we will. So, I'm actually very worried about jobs in general because there is nothing, and I mean literally nothing that I think a computer cannot do better than a human being.

We have always romanticized the human part and the human skills and all these things, and I always then start in a debate to say, what do you mean when you say focus on the human part? You know, what is that? And things come up like empathy and, you know...

Stephen: That was gonna be the first word I said. They're gonna hit you with the empathy and the emotional intelligence.

Anders: And I say, well, yeah, that's probably some lack of deeper understanding of how technology will function and what empathy is. You know, empathy to me is not a soft skill. It's a hard skill, right? And the question is also if that is what we strive for, or more of a rational compassion. There's a very interesting book on the topic by, I forget his name, Bloom, Against Empathy, where he talks about, like, the difference in how the mind functions. And yes,

We have some physical reaction to the emotion, so we cry, we turn red, we sweat, whatever, but a robot and a machine would not have that. But it's not given that the processes of having AIs or robots do the same, that that should be any different. You know, if a robot kept coming back with the same stupid answer all the time, it should get annoyed at that and it should ponder a new direction.

And I think robots will do that, and AI can and will do that in the near future. And, you know, you could have those reflections that are categorized as emotions. But, you know, where I ponder off into is: will there be a sentient consciousness, you know, will there be someone experiencing that emotion?

Is that something that we'll build in a machine? And here we could get into the details, but here I am of the opinion, and I romanticize the human being, to say that there is something about us that I hope is different from being just a quantum machine or a computer or a robot that we'll figure out.

And I, I hope that, that we will have that and, and I romanticize about the potential future where we can ask a question that no one has answered and there is a perceiver of that, you know, response from the enhancement of AI or whatever, that someone can experience progress. So the conscious experience of your own experience, I think that's all we have, every function of what it means to be a human being.

Anything we do, you know, cognitively or physically, I think machines can be much better at than us. And then the question becomes, should we? Right? So just because we can doesn't mean we should. You know, the first part of the Industrial Revolution was all about, you know, building machines and optimizing.

And we made mistakes and we error corrected. And we have always been driving and pushing, you know, for progress. What happens if everything you can envision or imagine is doable or has already been done? Then the responsibility is much bigger, because for the things that will then cause a lot of harm, we don't have a possibility to error correct.

So I think, you know, the past was about: is it achievable? The future is more about: what kind of future is worth striving for, what do we want technology to do? And if we hand over that decision to machines, then I think it's a very dystopian future. And coming back to your question on the jobs.

I think I've already reasoned in that direction. I don't think there is any job that is safe from technology if there is, you know, an economic incentive to optimize it and to do it, because I think eventually these machines will be so powerful that they can do literally anything.

Stephen: I think what you bring up, like empathy and the emotions, it's not like all of us are rocking around like these empathetic, emotionally intelligent people that are just all caring. A lot of it's driven by behavior. I know Alex Hormozi talks about, like, when you tell somebody, don't be a dick, it's like, well, what does that mean?

What are the specific things that they're doing that force you to call them or describe them in that way? It's like, well, you know, they keep on interrupting me. Okay, so you would tell the person, stop interrupting, or let the person complete their thought. These are the things that you're saying that we could literally teach robots to do, and then eventually all those things that we consider emotional intelligence are really just a series of behaviors.

Like, do not turn your head or do not look at your phone when someone else is talking to you. These are all things I think we consider, like, empathetic or emotional intelligence, but it's really just a series of actions. If a robot is just sitting there looking at you while you're, you know, talking, waiting for you to finish, repeating what you said, and asking you to explain or elaborate even more, we're gonna get that.

If robots are gonna get the emotional intelligence, they'll probably be better at it than most human beings. Is that a fair assessment?

Anders: Yeah, if they want to. I mean, once we start, you know, talking bullshit, the robot's gonna get, like, bored: you know, this is wasting my time. If the robot has a goal of efficiency, you know, why care about the stupid humans, right? So there are many things we could play with here, but yeah, in general, I think you're right.

But again, you know, coming back to empathy. If you are feeling bad, it's not so much about me stepping into your shoes and having that feeling, right? It's about you feeling better. And if I feel bad and you feel bad, we have a double minus, so it's not good for either of us. And if you have a robot that gives you goosebumps, you know, you can't goose it away.

It's real. The emotion on your end is real. You have that, you know. So I think what we'll see is that robots will become increasingly good at what we would consider empathy. And then the question will become, can we distinguish between a robot and a human being? Is that something that we will even feel, or do we just feel better because, as you said, the robot acts as expected of us, so that we like it and we find it good and it helps us, cheers us up, whatever.

And I've had that many times in the past where I have met people of various spiritual practices or people that have had traumas or had a terrible childhood or gone through difficult times in life be it drugs or alcohol, whatever. And they come up to me and they, they, they say, they feel me, and they have an energetic, some energy relation.

And they say to me that I'm very empathic and, and I'm just tired and worn out having spoken in front of hundreds of thousands of people come down from stage and I go talk to them. And I just say the things that I would like, say in a polite way. And I'm, I'm, you know, I'm there. I'm always there still.

Until I leave, I'm a hundred percent tuned in. But would I consider that something like empathic or something energetic? No, I see it as cognition. So, you know, it might be me: I'm born with the initials AI, so I might be an AI, so there might be, like, a lack of that in me. But I think it's very, very much about cognition and processing data.

And as long as the recipient feels that, it doesn't matter what I feel, right? And I think this is the argument that we will have for humanoid robots and machines in the future. We are the feelers. We are the ones that, you know, have an experience, a consciousness or a quality of what it feels like to be something.

Will robots have that? I don't know. I think none of us really knows. I think we could potentially find ways. I don't see them today necessarily, but on the other hand, I think robots can do the exact same thing without that. And that's obviously our challenge when we are gonna build superintelligence: to understand the implications of that if we do it in an externalized device.

Stephen: You know, before we go on to your new book, which talks exactly about bridging the gap between humanity and AI, The Singularity Paradox, I wanna touch on one point where you mentioned the art of being wrong. I feel like we're in a society where we're pointing a lot of fingers. We're blaming the government, we're blaming Trump, we're blaming Elon, we're blaming our neighbor.

We're blaming this religion, this country, this ethnicity, this race, this gender. What do we have to do to kind of turn things around? Because I felt like, you know, although September 11th was terrible, it brought some unity to a country instantaneously. Didn't matter what creed, what team you were voting for, there was some unity in the United States that has never been seen before.

And then, like, going all the way back now, or going all the way forward to January 6th, it feels like we've lost that. You mentioned the word divisive. It feels like we lost that. I don't wanna talk about the United States specifically, but what do you feel we need to do to bring us back into unison and stop pointing fingers and maybe look at ourselves, right?

Like, you know, self excellence is probably the best rebellion in this world. What are your thoughts?

Anders: Yeah, I think that's a very good statement. And I mean, if you're gonna trust others, it very often starts with self-trust. If you trust yourself, you know, then you can trust others. And if we trust others, we can work with others and we can have a relationship. And if we have base trust, we can have friction.

We can generate different opinions, we can talk about them. If we have friction and trust, then we can have progress, right? Once, you know, the trust is lacking, then it becomes that blame game or that winner-takes-all scenario. And we are in a time now where technology is so infinitely, potentially harmful that we need to work together.

There is a concept in Norway called dugnad. It's like voluntary work where you just put in the effort for others and you uplift others. And if you have an aspiration to grow, you know, you wanna play on a better team, right? So if everyone else is feeling good and everyone else around you is doing better, then if you wanna drive on and you wanna, like, put in the extra effort, you know, that's where you can shine.

And you see that in sports, and I think that holds true for society. The US was built on a Declaration of Independence, right? I think what we need today is to sign a virtual contract, a declaration of interdependence. I am because you are, and I can only be a mensch, or a better person, or function, in relationship to other human beings.

And that's a concept that I think is needed now, particularly in the US, but it also holds true for Europe: how to get back to a civilized debate. There are things that are problematic. There are things that are by all means not perfect, but there are quite a few things that are really good. I mean, we have progressed from a hundred million people to 8 billion people literally overnight.

Despite a terrible war in Ukraine and Gaza, never have fewer people, statistically speaking, relative to the whole of our population, died in wars than today. Never have we had fewer people living in absolute poverty. We have never lived longer, had more wealth. So we have had progress.

It's been steady progress on those, you know, KPIs or measurements, and could we do better? Certainly. And now what we're seeing is that we are kind of, sort of, seeing the rise of a winner-takes-all market with the technology. And if we don't figure out how to redistribute and how to share and how to do all of this together, then I think it will be very, very dark and dystopian for humanity.

We are definitely now creating a very, very elite group of superhuman beings that will either have access to these technologies or also merge with them, to the extent that there will be two different species. And if that superhuman species has no interest in bringing the rest along for the ride, then I think we're gonna be in a very, very bad space.

I think this is also, coming back to what can we do: of course we need to have, as I said, stable economies. But if our goal could be to have a social-ecological market economy, the first thing that we should look at is how the economy functions and how we can have a good operating system.

Then there is something to divide. Then we need good people. And when I say good people, I mean they should have some decency. And that is something that we can work on in education. We can start by, you know, people should learn how to learn. They should be interested in learning and progressing, in working with other people.

We should change our educational system, kindergartens, schools, how to work together and to get along, and have those values and that ethical foundation built into our educational models. That will at least not take out psychopaths and crazy people, but it will just generally bring a bit of a sense of decency back into civilization.

But it is a very complex topic. History again teaches us here that the pendulum swings back and forth, so one period overcompensates for the other. So while we did have a lot of ideologies and all this belief system of, you know, Greta Thunberg and, you know, everyone should be equal, the essence of that, respect for others, is good, but the way we wanted to implement it, I don't think, was the best way.

And we are now seeing a counter-reaction to that, which is also not good. And then the pendulum should find a way: we should respect others and find ways to get along. But then we also need to put in the work and think, you know, globally about how to tackle climate, wars, pandemics, the challenges of AI, obviously, and how to extend organized human life on this planet.

And that's a collective effort. So maybe introducing this declaration of interdependence could be a first start to get people to understand that we are in one boat, so to speak.

Stephen: I like that. You mentioned the pendulum. Do you find that the pendulum has swung kind of quickly recently? Like, we went from, you know, hiding magazines of naked women under our bed to now these women making more money than, you know, professional athletes, talking about, you know, sexual activity all over Instagram and every social media. We went from, you know, oh my God, somebody passed away, to now praising people that murder CEOs of, you know, large institutions because they're greedy.

Have you seen a pendulum and like, how do you play with it, even as an entrepreneur because it's swinging so quickly? How do you position yourself to make sure that you're not getting canceled over here because you didn't support Gaza versus like, you know, your firm beliefs or firm vision of where we need to go in the future?

Anders: Hmm. No, that's a very difficult question. I think, you know, the powers that I thought were gone, I did not see this coming to that extent. And I think, you know, the president now of the US has certainly been a big driver of this change in how, quote unquote, rulers of a country act. And I'd say that in a matter also relating to democracies, this shift from democratic structures where there is decency and people on both sides of the aisle can come together and work on topics.

That seems to have shifted much, much farther and much more rapidly than I could ever have imagined, to where we are now in a technocracy where we also, you know, have completely different ways of operating. It was unimaginable that people could act in that manner, say the things that they do, you know, criticize or attack, change opinions literally day by day, and just flood, you know, media and humanity with absurdities that confuse us.

And we start to question, you know, if everyone is, you know, right or wrong or whatever. And I did not see that. So I don't think there is an easy way out or an answer to this. One can only hope for the decency of humanity. We have pulled together in the past, and we have figured out that when we understand that some things are existential, or are an existential threat to our species or our society, then history has shown us that we have the capacity to come together.

And we could only hope that that is true also today, and that we'll figure it out and that we'll find ways to kind of bounce back to more civilized ways of interacting. I think that's needed not only on, you know, the race for technology, but in all matters that we have to figure out, because I think humans are good.

I don't think we're born to kill each other. It's not like we, you know, come to this world and wanna, you know, build these authorities and kill everyone else. But along the way, we are blinded by narratives and religion and belief systems and categories and structures where we tap out.

And the nation state is obviously an idea that divides us with borders. And moving forward, I thought literally that we would move towards a world where we will have a lot more cities, a lot more locality and a lot more globalism. So we will have digital interdependence and local identity, where we have a wave of mayors that will become powerful for their region, because people like to have an identity, a belonging to something, to an idea. And this idea now seems to be floating around in a very strange way. And I think that's very unhealthy, first of all, but I don't have a, you know, clear understanding of what would be a quick fix to that.

So, we can only hope that, you know, the goodwill will, you know, keep coming through and shining a light on a brighter future where we come together and tackle these challenges because we do have great challenges ahead.

Stephen: And with your book, you know, maybe describe a little bit about what the book meant to you, where you came up with this philosophy. How do you think we're gonna be bridging humanity and AI? You know, we listen to a lot of the business podcasts. We're like, use AI for this, use AI to grow this, less, you know, overhead, less headcount.

It's like we're so focused on the business aspect that I think sometimes we lose that humanity aspect as well. What are your thoughts? How are we gonna do this? You mentioned kindergarten. Do we have to start integrating AI from a young age? We're seeing schools in the US now, maybe not a lot, but a few, that have said, we're getting rid of the teachers, we're gonna have AI teach our students.

What are your thoughts about this?

Anders: Hmm.

Stephen: and maybe pulling from some concepts from your book.

Anders: Sure. So, before I get to that, you know, the last two books that I've written are very technical. They're co-authored, the first books that I've written with a colleague and a good friend of mine, Dr. Florian Neukart, who is a quantum physicist. So what we do there is what we call sci-fi with a PHI.

So we do science philosophy: we dance at the outskirts of mind and matter and ponder these questions of the nature of reality. And so the first book we wrote together was a book titled Ex Machina: The God Experiment, where we look at the simulation hypothesis and whether we live in a simulation or not.

So two years ago I was in Vienna and I sat down with 12 quantum physicists, and I asked them if they could prove to me that we are not living in a simulation. Turns out that's really hard. So we wrote a book about that. It just came out five months, six months ago. And that's a very interesting take on, you know, how we perceive reality and what are the borders of, you know, quantum physics and math when it comes to simulating our universe.

So the second book that we wrote, which just came out, is titled The Singularity Paradox: Bridging the Gap Between Humanity and AI, as you said. And in this book, Florian and I look at the problem that we see, that we are now, you know, racing towards some kind of technical singularity or some superintelligence where we do everything in an externalized device. So how to envision this: if you were to take, like, 10 neurons in your brain and replace them with some nanobots or some, you know, artificially created neurons that we know exactly how they work and how they fire, and we integrate them, would you consider that you're still a human? So you can see where,

Stephen: yeah. Yeah. When you ask it on that level, you're like, yeah.

Anders: So you would say, yeah, most likely I'm conscious and I just have those 10, you know, 10 neurons. And you could do the same for a hundred thousand neurons. You could do it for a hundred million neurons. You can do it for a billion, all the way up to 85 billion or a hundred billion, all the neurons in our brain.

So now we have replaced every neuron in our brain with an artificial neuron. So do we now have a mind? Do we have a conscious experience? Do we have the exact same thing? You know, if we take every atom in our body and replace it with an artificially created entity, do we have a human being?

Right? That's the question. Or there is the part where we take and replicate Stephen into a robot with exactly the same neurons and, you know, the chemical scum, as Stephen Hawking called it, our body exactly the same. So is it now a replica that has a conscious experience, or is it a machine that does the same but doesn't have a conscious experience?

So, so

Stephen: where could I buy one? Because that's exactly what most entrepreneurs listening to this call would love to do, right? They're like, Hey, I don't wanna hire anybody. I just want another of me that has more time.

Anders: Yeah, so you could do that, but the question then becomes, you know, is there a point where the lights are still on, but there is no one home to perceive them? So is there a point where our conscious experience of the world gets overwritten? So we could envision a future of superintelligent beings walking around, talking to each other, doing the same, but there is no one home.

Right? So this is a scenario that I call the final narcissistic injury of mankind, where we basically believe with our small monkey brains that we can create this superintelligence and still understand and control it, right? We move from divine creationism to human creationism. We wanna have bliss, divinity, and immortality out of the machine, the ex machina, the god out of the machine. You are building God. So there are quite a few challenges here. If you wanna create your own dad, it would cause some issues, philosophically speaking, right? So this seems to be our path: we just offload our authorities and do it externally.

So what we think is that this is a very dangerous path to be on, because we will be to these entities like ants are to us, right? And therefore we propose something that we call artificial human intelligence. So here we hack our biology and chemistry, and we start off with a human being, with a mensch, as I call it, from the Yiddish, a positive, creating human being, and we evolve from there. So we have that human aspect and we enhance that with artificial intelligence. So we take biological substrates and we build that on top of a human being. So we kind of, sort of, take evolution into our own hands.

That's the path that we outline in this book. And we think this is a more promising path for humanity, because then at least we have a slight chance to ensure that we keep whatever that is, the human aspect of it, right? So this is what we think is a different way to approach it, because eventually, if we don't ensure it, we are reverse engineering everything from the human being.

But we are not starting at the foundational level, at life itself and the creation of conscious experience. As long as we don't know what it is, we have no idea how to understand whether this entity that we have created would be and behave like a human being. And we think it's utterly important that we create robots that behave in a similar way to human beings.

Because at large we have managed to survive and keep on. If we had more understanding and more knowledge about how to tackle climate and all that, then we could do a lot of great things, but we need to keep that aspect, the humane part of it. So this is the book that we wrote, but I wanted just to touch on your question of what entrepreneurs and leaders can do.

So I wrote a book last year titled The Viking Code. And The Viking Code is much more accessible from a writing standpoint. It's also very autobiographical. And there I look at how I think organizations will be in the future, and it's more of a leadership manual on how to cope with it and what to define as success.

So in my home country of Norway, only 5.5 million people, we have a lot of great athletes coming out, young boys in, well, some of the largest sports around the world: tennis, golf, soccer, you name it, you know? And I was looking at why that is, from a country that does not have high performance baked into its nature. We are more of a collective, but we have produced these individual athletes.

And I was very curious about the formula behind that success. And then I realized that they're not only successful in their domain, they're also loved and cherished and valued by competitors and referees and media alike. And that to me was very, very interesting. So this book is very accessible, and it's about how to build a high-performance culture that is deeply rooted in values.

And coming back to the question of what can we do, what should we do: this is probably the book where I describe how I envision a future of how to build organizations, and how political leaders and business leaders alike can adopt this mentality where we do not measure life success in finite goals, in gold medals and the fortune of fame, but in our micro-ambitions to have progress.

So circling back to when you talked about the nine to five in the beginning: to me, success is just that. I lived a life where I was good at sport. I built my companies, I sold my companies; for many, that was success, but I never experienced success. So I stopped doing that. And today I consider myself highly successful, for me, because I get to get up every day and experience and learn and have that experience of my own experience.

And that to me is life. It's a life philosophy. And if we could adopt that, I think we could both achieve amazing things, because the quality of our input, what we do, is very high, and then we will have a very good output. So take the analogy of a marathon. You wanna have a good time at the end, but the quality of your next step is what makes the output.

So the quality of your input, the micro-ambitious step, accumulates into a good time, and everything in life is like that. It's kind of, sort of, like compound interest, you know; it's basically life, health, wealth, whatever. And getting the conscious being back into that state of being, you know, enjoying nature, enjoying other people, enjoying life, seeing the wonders of normality, the wonders of the ordinary, so to speak.

That, I think, is a very important task, to take us out of that new existentialism of being undead and just reacting and functioning, into being active and doing. That also is a big part of that new book that came out: how to bridge the gap is also about taking back the agency of what it means to be a human being.

Stephen: I have so many questions based off of that. I'm gonna start with the book. Do you think we're losing some of that humanity? Like, you described how we can keep it, but do you think the competition, like we already saw when, you know, DeepSeek came out, the competition to, you know, be the best in AI, is removing that?

Like, hey, nobody's really thinking about humanity when we have to beat China in the AI or tech race. And do you think we're losing some of that humanity just 'cause of sheer competition versus creating

Anders: Yeah, definitely. I mean, this philosophical zombie state is manifest on many levels. We have created an optimization society, and if every one of us just gets up in the morning and has to react and function and solve tasks, that becomes very exhausting. You know, I think life is a wonderful journey to nowhere.

But it's all about experiencing that journey. And if you just react and function, you don't have that experience, you're just solving tasks. You're literally a very, very bad memory device that is slow and incompetent and has a very, you know, slow output mechanism, two fingers to print something.

We cannot compete with machines. And the more we push for that, the worse off we're going to be. So I think it's very important, and this is a new existentialism in philosophy that I'm writing about right now: we used to have the anxiety of the finitude of life. We are now solving that.

We're taking that out. So what then becomes the challenge? Then the challenge becomes the liveliness, the Lebendigkeit, as you would say in German, the vitality of life itself, to experience your own experience. And yes, I think we are in a state where we are losing that. And that also takes a lot of the humanism, or the humane aspect of working together, out of it.

And that does not necessarily mean that we will or should slow down progress. It's just that we have to figure out how to have that agency of the feeling of creation, which I think is a very big part of being alive. It's not so much about the search for happiness or purpose, it's about giving a purpose to life, you know, to give meaning to life.

And that is about that agency. And yes, I think we are to some extent losing that agency at a high speed.

Stephen: I think, you know, when you talk about the information versus knowledge, I think we're also absorbing too much information. We have this guy saying, Hey, you have to be alpha male and have a six pack. Even though they're on steroids, you have this other person saying, Hey, you have to live in your van.

Happiness is, you know, bigger than economic. Like there's so many people telling us what should make us successful and what should make us happy, that we're kind of like listening to everything and we're not listening to ourselves. That's what it feels like for me, that there's so many people saying, Hey, you're bad if you do this, or, Hey, you're not good if you do that.

Or, hey, DEI is good, and now two years later, DEI is bad. It's hard for us to kind of filter out all that noise and be like, hey, what does my gut say? And I think this is why a lot of companies are getting into, like, gut health, 'cause they're realizing, hey, our instincts are where, you know, the happiness and the success and, I think, the physical health will come from.

Anders: Yeah. And also, I mean, again, we are back to that right or wrong, right? That binary way of thinking. And life is not binary, and it is not about balance. Nothing is in balance, right? It's about a strive towards a dynamic equilibrium. So that means that when you are feeling bad, you know, it's not about forcing you to feel good, it is about making you feel less bad.

So it's not about, like, finding happiness, it's more like: am I not happy? Am I unhappy? Right? Am I feeling down, miserable? Okay. Are there three things that I can write down where I feel miserable? And most of us could write down a couple of things that we think, you know, make us feel bad. And now you can say, okay, can I take out one of these things?

Can I end an unhealthy relationship, or can I change some behavior here? Can I do something? And then you understand that yes, there are so many things that we can do. In comparison to previous times, we have never been in a better position to influence our own reality, be it to influence what we can do and should do, but also how we react to outside circumstances.

So if we say that we thin slice that and we take out one thing and we do it again, we take out two things. And now you take out things that are holding you back, dragging you down, making you feel unhappy. And the more you do that and you remove yourself from that state, you put yourself in a position to be struck by something called happiness.

You set aside time, you block that, you do that. And that's that incremental progress where you put yourself in a position to become happier, to be struck by happiness. And the same, I think, also goes for the whole purpose and meaning: it's not that we should do this or that; we should figure out what we do not do.

And there's a very good analogy here to the perfection of life, the beauty of life, from Michelangelo. So when he created the statue of David, he was asked, how could you create this beauty? You know, what was the master plan, and how did you envision, like, the finitude of this? And he said, that was very simple.

I just removed anything that was not David. Right? And this is a very good analogy for life. It's not about finding that one 'I'. If you're not happy in the place that you are, you're not set in stone. You're not a subject. And we have too many people running around seeking trauma, or seeking: why did I become how I became, why did I become this?

Instead of asking, am I happy with this? And if not, you know, what can I do to become less unhappy? And this holds, taken to any extent, even to the extreme. Nietzsche writes about this: in the darkest abyss, you only see the light. So at the end point of life, there is only one thing, and that's progress.

You know, it's a striving forward, even if it's very small progress. That's all we have, man. We could put an end to this and we could move off right then, we could turn it off, you know, instantly. But that is not an option, because life has wonders. And if it's miserable, you know, it's about making it less miserable.

It's not about making it perfect. And I think these are many lessons from philosophy, things that we could take back so that we get into a state where we can step out, reflect, and think about what you just said with the intuition and everything. And this is something that I think a lot about.

You know, how can we see philosophy as a practice? How can we apply practical philosophy to life? How can we, you know, move away from that reaction? And this possible wisdom that I mentioned earlier is something that I picked up; it also has a philosophical foundation, but it comes from a great, inspiring person from Sweden who founded something called the Gapminder Foundation.

He passed away 10 years ago, way too early, but there are great videos, YouTube videos of him to watch, and he explains the world to us as it really is. His book was called Factfulness. And if we see how many good things there are, and how many things we can influence, and how many things are foreseeable, where they will go, you know, and what will happen.

So if you understand the possibilities of technology, and you pair that with the likelihood of how human beings will behave, then you can anticipate the future and you can create the now from the future. And that is a very fulfilling task, because you start to see very plausible scenarios of what a future can hold.

And I think this is one of the most lacking skills in leadership today, anticipatory leadership: to think plausibly about potential futures. And I always ask, when I talk to CEOs, so what do you think will happen? Do you think we'll have a Nokia 5110 and play Snake and have a bandwidth of 128K ten years from now?

Or do you think it will continue? Will we have progress? And everyone says, we'll have progress, right? And I say, but why do you behave as if there would be stagnation? We have studies.

Stephen: won't, because their futurism, their possibilities are based on their next bonus, not the next five

Andres: Also that, also that. But also, to the point: we look at studies of the past and project that into the future, and that's a very dumb idea, right? So I think this is a skill, and it also has a lot to do with a fulfilling life, to fill your life and to live your life: to anticipate future scenarios and then to create that future.

And that's the activism, and that takes you out. And I think that sort of circles back to where we started: that takes us out of that reaction, because reaction is functioning on behalf of an externalized device, trying to keep up with the machine. And that's a terrible state of mind to be in for a human being, or a terrible state to be in physically.

So I think taking back that agency, anticipating the future, and being an active agent where you put in, and you fill, and you do, and you act, that is something that we need to work on. And I think that's where the potential is also for tackling a lot of these great challenges ahead.

Stephen: I think this will be the first podcast this year that mentions agency and AI without using the words agentic AI. I think you've hit on so many great points. And it's so funny, you know, people will be like, this is very philosophical, but this applies directly to business, directly to leadership, directly to where I think a lot of entrepreneurs struggle.

They're not struggling with CAC and LTV. These are the areas where, you know, their mind is zapped. These are the areas where they're losing so much focus, because they're dealing with a lot of these existential issues that you brought up in this conversation. Where can people find you? What's out there?

Now we know The Singularity Paradox is out. Is there anything else you want to direct people towards as we end this conversation?

Andres: Sure. I mean, Stephen, maybe for the entrepreneurs, coming back to the entrepreneurs and what they struggle with: I've just finished something that I call the triangular alchemy of modern business, and it covers the anticipatory leadership part that I just mentioned. And I've been looking at some of these successful organizations, and you see the tech companies in the Valley and the big, you know, conglomerates, with Alphabet and also Microsoft pivoting, and how they are now set up.

And it's not by design that they have, you know, reshaped their organizations; it's a consequence of how the world has changed. I've tried to put that into a structure for any company of any size, in any industry, and I think we're gonna move away from those, you know, HR, CTO, CMO, CSO structures and silos in a hierarchical structure.

Even though it's become more decentralized and flat and everything, it's still pretty much those pillars. What I envision are three pillars that are put together in that triangle. And the first pillar is forge. So forge means that you forge your clients, or you incubate your own clients. Successful companies today, they don't sell.

They have clients come and buy. And you do that by supporting your clients to grow. So if you take, you know, cloud-based solutions, take a Google or whatever: you want companies to grow, because then they'll need more licenses and they come and buy from you. You don't sell them, you know, additional licenses.

You just help 'em grow. So you do that. You can think about this as a very old concept: every bar, every restaurant was built on that concept. Beverage companies came in, gave them free products, gave them marketing, gave them support to set up the bars, to set up the restaurants. The good thing here, and the analogy holds for the beverage companies alike, is that you are then with the client.

You are at the front. You're not in the boardroom thinking of strategy; you are working and learning. So you're learning about the new trends. You can adapt to non-alcoholic beverages, you can adapt to the latest features, whatever. This is one pillar. The second pillar is about investment. Any company today, small or large, needs to operate as a VC, to have a branch that is detached from the core business and focuses on investment, on building assets for the future, but also on the incubation of ideas or concepts or businesses that could tear down your core business at the end of the day, or that you can adapt for the future.

So you need to have the forge part, you have the investment part. You operate with one unit as a VC. And there also you are at the forefront of technological change. You can learn from different industries, you can pivot, adapt things to your business that didn't fit and weren't meant to fit.

But it just happened to fit, because you saw some patterns that you wouldn't have seen if you had been inside the company. And the third pillar is efficiency. So anything that we have tied to sustainability and all that now goes into efficiency. Only efficient usage of resources will lead to profit. So operational excellence, agility. And agility requires a high sense of stability, right?

You need to have stable processes in order to be agile. So everything becomes a part of efficiency. You will see that with energy: 10 to 12 years from now, the marginal cost of energy will plunge towards zero, because the most efficient energy sources will have beaten out all the old ones; that's, you know, fusion, solar.

You know, whatever; those will be infinitely cheaper than they are today. So we will have access to energy as we do with intelligence today. That's the hyper-efficiency race. So what you see now is, with those three pillars, you have three different entities that fight for the same resources. If you have anticipatory leadership at the heart, so that you build trust in the future, trust in the organization, now you have three sections or pillars in the organization fighting for the same resources.

Now you have friction. Now you have friction so that people talk to each other and they discuss how to move on. You have a very dynamic organization, and I call this the becoming organization. We now have a lot of static organizations that have a product, a business model, and they export that. That concept is, to me, with technology, dead; we need that dynamism, and that's the becoming organization.

So if you set up a company today, I would go out and design the company from scratch like this. And I encourage people in business to think about how they can create the becoming organization, or what I refer to as the triangular alchemy of modern business. So maybe that's some useful, specific tips to end off our great conversation, having gone down the rabbit hole of simulation and pondered the singularity. So, yeah. And I'm on andersindset.com. That's...

Stephen: You're like, yeah, and you can find me on social media, but I just laid out exactly how to be successful in business for the next 20 years. Or you can just like log on and buy one of my books.

Andres: No, no, I don't, no,

Stephen: a mic drop

Andres: no, it's just that, that, that's where I update these ideas and concepts and they're dynamics. So, so the books to me I mean, I'm always happy if people read books, but to me, to writing is thinking. So, I am a nightmare to publishers to that extent because once I've published one book, I start writing.

So many go on tour and do all this. But I'm happy when people wanna read; I enjoy that, of course. But to me, look, this is about ideas and thoughts that I'm working on constantly. So if people are interested, feel free to read up and log in there. And yeah. So...

Stephen: And we're due for a part two. We didn't even get into crypto or quantum computing, but I thought this was so valuable for our audience. So we're definitely gonna do a part two. We'll talk a little bit about crypto, maybe in, you know, six to eight months, and we'll see how far AI has come since this conversation.

Thank you so much for joining Around The Coin. I feel like we're so much more enlightened, and that last three minutes was epic.

Andres: Thank you so much, Stephen. Thank you for having me.