Nivi: Hey, this is Nivi. You're listening to the Naval Podcast. For the first time in recorded history, we are not at the same location. I am actually walking around town and Naval might be doing the same, so there might be some ambient noise, but we are going to try hard to remove that with AI and some good audio engineering.

Naval: Podcast recording is so stilted, because you have to sit down, you schedule something, and you have this giant mic pointing in your face, and it's not casual. It makes it less authentic: more practiced, more rehearsed. I get that it produces maybe higher-quality audio and video, but I feel like it produces lower-quality conversation.

Nivi: And we all know brains run better when you're moving around or just going for walks.

Naval: Absolutely. My brain is powered by my legs.

Nivi: I pulled out some tweets from Naval on the topic of AI. We want to talk a little bit about AI, hopefully in a timeless rather than a timely manner, though some of it is inevitably going to be timely.

Naval: Yeah, there's a tendency among internet commentators to look at something said five years ago and say, "Aha! Well, that turned out to be false."

Well, yes, of course. No one can predict the future. That's the nature of the future. If we could predict it, we'd be there already.

So it's always dangerous to talk about the future when people listening aren't aware of that, but just be charitable. We are obviously talking about things in February of 2026, and we're working with the information we have now, and not with perfect hindsight.

And unless you've put out your own risky, narrow, precise, falsifiable predictions to compare against, there's no basis for saying somebody was right and somebody else was wrong.

If You Want to Learn, Do

Nivi: Before we jump into the tweets, do you want to say anything about what you're doing with your time or what you're doing at Impossible?

Naval: Not really. We're working on a very difficult project—that's why it's called Impossible—with an amazing team, and it's really exciting building something again. It's very pure, starting over from the bottom. It's always day one. I guess I just wasn't satisfied being an investor, and I certainly don't want to be a philosopher or just a media personality or a commentator. Because I think people who just talk too much and don't do anything… they haven't encountered reality.

They haven't gotten feedback—the harsh feedback from free markets or from physics or nature—and so after a while it ends up becoming just too much armchair philosophy. You probably have noticed my recent tweets have been much more practical and pragmatic, although there are still occasional ethereal or generic ones, but it's more grounded in the reality of working every day.

And I just like working with a great team to create something that I want to see exist. So hopefully we'll create something that will come to fruition and people will say, "Wow, that's great. I want that also," or maybe not, but it's in the doing that you learn.

Vibe Coding Is the New Product Management

Nivi: So I pulled out a tweet from a couple days ago, February 3rd: "Vibe coding is the new product management. Training and tuning models is the new coding."

Naval: There's been a marked shift in the last year, and especially in the last few months, most pronounced with Claude Code, a coding agent so good that you now have vibe coders: people who didn't really code much, or hadn't coded in a long time, using English as a programming language, as input into a coding bot that can do end-to-end development.

Instead of just helping you debug things in the middle, you can describe an application that you want. You can have it lay out a plan, you can have it interview you about the plan. You can give it feedback along the way, and then it'll chunk the work up and build all the scaffolding.

It'll download all the libraries and all the connectors and all the hooks, and it'll start building your app and building test harnesses and testing it. And you can keep giving it feedback and debugging it by voice, saying, "This doesn't work. That works. Change this. Change that," and have it build you an entire working application without your having written a single line of code.

For a large group of people who either don't code anymore or never did, this is mind-blowing.

This is taking them from idea space, and opinion space, and from taste directly into product. So that's what I mean—product management has taken over coding. Vibe coding is the new product management.

Instead of trying to manage a product or a bunch of engineers by telling them what to do, you're now telling a computer what to do. And the computer is tireless. The computer is egoless, and it'll just keep working. It'll take feedback without getting offended.

You can spin up multiple instances. It'll work 24/7 and you can have it produce working output.

What does that mean? Just like now anybody can make a video or anyone can make a podcast, anyone can now make an application. So we should expect to see a tsunami of applications. Not that we don't have one already in the App Store, but it doesn't even begin to compare to what we're going to see.

However, when you start drowning in these applications, does that necessarily mean that these are all going to get used or they're competitive? No. I think it's going to break into two kinds of things.

First, the best application for a given use case still tends to win the entire category. When you have such a multiplicity of content, whether in videos or audio or music or applications, there's no demand for average.

Nobody wants the average thing. People want the best thing that does the job. So first of all, you just have more shots on goal. So there will be more of the best. There will be a lot more niches getting filled.

You might have wanted an application for a very specific thing, like tracking lunar phases in a certain context, or a certain kind of personality test, or a very specific kind of video game that made you nostalgic for something. Before, the market just wasn't large enough to justify the cost of an engineer coding away for a year or two. But now the best vibe coding app might be enough to scratch that itch or fill that slot. So a lot more niches will get filled, and as that happens, the tide will rise.

The engineers behind the best applications are going to be much more leveraged. They'll be able to add more features, fix more bugs, smooth out more of the edges. So the best applications will continue to get better.

And even individual niches—such as you want an app that's just for your own very specific health tracking needs, or for your own very specific architectural layout or design—that app that could have never existed will now exist.

We should expect what happened on the internet: Amazon replaced a bunch of bookstores with one super bookstore and a zillion long-tail sellers; YouTube replaced a bunch of medium-sized TV stations and broadcast networks with one giant aggregator, maybe with a second one in Netflix, and then a whole long tail of content producers.

How to Value an AI Company

Nivi: Let's go on to the next tweet, from February 5th: "In an AI-infused world, think of a software business as a nervous system. The inputs are signals. The outputs are decisions. The moat is the feedback loops."

Naval: I think about this a lot. When you're building a software business that is going to be AI-infused—and I think every software business over the next decade is going to be AI-infused—what you're really trying to build is a nervous system.

You're taking in signals from the world. Signals are data, user behavior, what's happening, context. And you're outputting decisions. Every app is a decision-making machine.

The most important thing is the feedback loops. Are you getting the signal back? Are you updating the model? Are you updating the decision-making? Are you getting the outcome?

If you're not getting the outcome—if the app is not learning from what happens in the world—then it's going to be static. It's going to be the same app in ten years as it is today. It's going to get outdated. It's going to get commoditized.

The best software, the most valuable software, is software that gets better with use—not because the engineers are adding more features, but because the model is updating itself based on what the users are doing, based on the outcomes.
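The idea above, software that gets better with use because the model updates itself from outcomes, can be sketched with a toy example. Everything here is illustrative and not from the conversation: the class name, the options, and the simulated reward rates are invented for the sketch. A minimal epsilon-greedy learner takes in signals, outputs decisions, and closes the feedback loop by updating on each observed outcome:

```python
import random

class FeedbackLoopApp:
    """Toy sketch of an app as a nervous system:
    signals in, decisions out, feedback closes the loop."""

    def __init__(self, options, epsilon=0.1, seed=0):
        self.options = list(options)
        self.epsilon = epsilon                    # how often we explore
        self.rng = random.Random(seed)
        self.counts = {o: 0 for o in options}     # how often each option was tried
        self.values = {o: 0.0 for o in options}   # running mean reward per option

    def decide(self):
        """Output a decision: usually the best-known option, sometimes explore."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.options)
        return max(self.options, key=lambda o: self.values[o])

    def observe(self, option, reward):
        """Feed the outcome back in; the model updates itself with use."""
        self.counts[option] += 1
        n = self.counts[option]
        self.values[option] += (reward - self.values[option]) / n

# Simulated world: option "b" actually works best for users.
true_rate = {"a": 0.2, "b": 0.8, "c": 0.4}
app = FeedbackLoopApp(["a", "b", "c"], seed=42)
world = random.Random(7)
for _ in range(2000):
    choice = app.decide()
    reward = 1.0 if world.random() < true_rate[choice] else 0.0
    app.observe(choice, reward)

best = max(app.options, key=lambda o: app.values[o])
```

After enough use the app converges on the option that actually works, which is exactly the property a static app without the feedback loop never gains.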

This is why I think the value in software is going to shift. A lot of value in the old software world was in the process. In the workflow. In the best practices encoded in the software. That value is going to get compressed because AI will encode best practices instantly.

But the value that's going to appreciate is the proprietary data. The proprietary feedback loops. The real-time signals that you're getting from the world that nobody else has.

The New Bottleneck: Evaluation

Naval: I think the new bottleneck—the new skill that's really hard to automate—is evaluation. How do you know if the AI did the right thing? How do you know if it's getting better?

This is actually a very hard problem. It's easy to generate code. It's very hard to know if the code is good. It's easy to generate a response. It's very hard to know if the response is correct.

The reason this is hard is because evaluating requires judgment. It requires understanding the context. It requires knowing what the outcome was. And a lot of the outcomes are in the future. You don't know if this code is going to work in production until it runs in production. You don't know if this response was the right response until you see what the user does next.

So evaluation is the new bottleneck. And evaluation is a combination of things: it's the data that you're collecting, it's the metrics that you're choosing, it's the human feedback, it's the automated testing.

I think the best AI companies—the ones that are going to be worth a lot of money—are going to be the ones that figure out how to evaluate. How to measure. How to get the signal back. How to close the loop.
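The evaluation loop described above, automated checks plus a metric plus a bar to clear, can be sketched in a few lines. This is a minimal sketch under invented assumptions: the task, the check functions, and the model names are hypothetical, not anything discussed in the episode.

```python
def grade(candidate, checks):
    """Run automated checks against one output; return the fraction passed."""
    passed = sum(1 for check in checks if check(candidate))
    return passed / len(checks)

def evaluate(outputs, checks, threshold=0.8):
    """Score a batch of model outputs and flag those below the bar."""
    scores = {name: grade(out, checks) for name, out in outputs.items()}
    failing = [name for name, score in scores.items() if score < threshold]
    return scores, failing

# Hypothetical task: "summarize in one sentence, mention the product name".
checks = [
    lambda s: s.count(".") <= 1,     # one sentence
    lambda s: "Acme" in s,           # mentions the product
    lambda s: len(s.split()) <= 25,  # actually short
]
outputs = {
    "model_v1": "Acme ships widgets. It is great. Buy now.",
    "model_v2": "Acme ships reliable widgets to customers worldwide.",
}
scores, failing = evaluate(outputs, checks)
```

Generating the candidate outputs is the easy half; choosing checks that actually capture "the right thing" is the judgment-heavy half, which is the bottleneck being described.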

The Future of Search

Nivi: Let's go to the next tweet, from February 10th: "Search used to be the interface to all knowledge. AI is going to be the interface to all knowledge. But instead of returning links, it's going to return decisions."

Naval: This is a subtle point that a lot of people miss. They think AI is just a better search engine. It's not. Search was about returning relevant information. You had a question, you got links. You had to do the work of figuring out which link was right, reading it, synthesizing it, making a decision.

AI is going to return decisions. It's going to return the answer. It's going to return the action. It's going to return the thing you should do.

This is why the model matters so much. Because the model is the decision-maker. The model is the one who's going to say, "Based on all of this information, here's what you should do. Here's the answer. Here's the decision."

And that's a very different relationship. With search, you were the decision-maker. The search engine was a tool. With AI, the AI is more like an advisor. It's making recommendations. It's weighing tradeoffs. It's saying, "I think this is the right answer."

That's why it's so important that the AI be aligned. That the AI actually wants the right things. Because if the AI is making decisions for you, you need to trust the AI. You need to believe that the AI has your interests at heart.

The Productivity Paradox

Naval: There's this productivity paradox that's happening right now. Everyone's using AI. Productivity is not going up. It's actually going down in some cases. Why?

I think it's because we're in the transition period. We're taking the productivity gains from AI and we're spending them on more outputs, not on less work. We're producing more content, more code, more decisions, more everything. And so the per-person productivity number looks flat or down, but the total output is going up dramatically.

I think this is the nature of technological transitions. When the spreadsheet came out, productivity didn't go up immediately. It took ten years for the productivity gains to show up in the numbers. Because first you had to change the way you worked. You had to change your processes. You had to retrain people. You had to build new habits.

It's the same with AI. We're in the habit-formation period. We're figuring out how to use it. We're figuring out what it's good at. We're figuring out what it's bad at. We're building the processes around it.

But the output is definitely going up. The amount of code being written, the amount of content being created, the number of decisions being made—all of that is going up dramatically. The productivity numbers will catch up.

What Impossible Is Building

Nivi: I know you can't say too much about Impossible, but can you give us a sense of what you're working on?

Naval: I can say that we're working on something that we think is going to be very important. We're working on a problem that we think is worth solving. We're working with a team that we trust and respect.

What I can say is that we're not building another chatbot. We're not building another coding assistant. We're not building another search engine.

We're building something that we think is going to be foundational. Something that's going to change the way that people work. Something that's going to be able to do things that currently require humans.

I think the most important thing is that we're trying to build something that we would use ourselves. Something that we would want to exist. Something that we think is going to make the world a better place.

And we're doing it in a way that's sustainable. We're not trying to move fast and break things. We're trying to move fast and build things that last.

That's all I can say right now. But hopefully in the not-too-distant future, we'll have something to show for it.

Closing Thoughts

Nivi: Any closing thoughts?

Naval: I think the most important thing is to stay curious. To stay open. To be willing to change your mind. To be willing to be wrong.

The world is changing very fast. The old mental models don't work anymore. The old categories don't fit. The old predictions are being overturned.

And so the most important skill—the most important attribute—is intellectual humility. The willingness to say, "I don't know. Let me find out. Let me update."

AI is teaching us that. Because AI is constantly saying, "Here's what I think. Here's my best guess. But I'm not sure. Let me update as I learn more."

And I think we should do the same. We should be willing to update our beliefs based on new evidence. We should be willing to change our minds. We should be willing to say, "I was wrong."

That's the only way to learn. That's the only way to grow. That's the only way to navigate a world that's changing as fast as this one.

Thanks for listening.