A Conversation with Ben Horowitz
Ben Horowitz is the cofounder of Andreessen Horowitz (a16z), which recently became the largest American venture capital firm by assets under management. Below are excerpts from our recent conversation, recorded on February 13, 2026, which have been lightly edited for clarity.
On Government Classification of AI, Math, and Physics
AWG: I have to ask you this question. You and Marc [Andreessen] towards the end of the last administration were very public[ly] making comments that you took a meeting at the White House and, if I’m relaying the comments accurately, you were dismayed to hear about plans to classify AI progress just like... advances in math and fundamental physics had been purportedly classified or overclassified for decades. And I’m curious at a few levels. One, if that’s accurately characterizing what you heard, what do you think was classified? What do you think was the impact on the economy and the world from such classification or overclassification of math and fundamental physics? And, what would you have done differently then if you had been in charge?
BH: Yeah, so I can tell you what was said. I said look, you know, I was trying to be pragmatic. I said, “At the core, AI is math. That is what it’s doing, it’s math. So if you start restricting the models and you start regulating the models, you’re just regulating math. You’re outlawing math in some way. Either you’re outlawing parts of math or you’re saying you can’t do enough math.” And [the White House official] goes, “Yes, we can do that.” That was his answer. He goes, “Yes, we can do that. We did that in the ‘40s around nuclear physics. And some of that stuff is still classified today.” And one, I was shocked — my jaw hit the floor. I was like, “Wow, that’s crazy.” And then, this [classification of AI] would be even crazier. I don’t know what it was [in physics that remains classified today], but I’ll just make this comment. If you look at the progress in the US and in the world in [fundamental] physics up until the Einstein [and] John von Neumann era and then [compare it with progress] since then, it’s pretty startling how little progress we’ve made. I would just say that many of the [fundamental physics] ideas that have come since then don’t seem to work. And hopefully we’ll get to the other side of that [lack of progress in fundamental physics] with AI figuring things out. But I do wonder, did we put something away [through overclassification] that we knew that would have unlocked some of the [fundamental physics] problems we’re trying to solve now?
AWG: That’s fascinating. That is, for the record, what I assumed you meant and/or heard or inferred. So, if I may, the second part of the question: what would you do going forward now if you knew for a fact that such classification of fundamental physics had in fact happened? How would you fix the world?
BH: I really don’t know what they did, but look, I just think that stopping [physics]... it didn’t work, right? The Russians did get the bomb, including the exact trigger mechanism, which was the most proprietary thing; they got it exact, part for part. They were able to get the whole thing from us despite all this classification and whatnot. So it didn’t do anything positive. And restricting knowledge, I just think that’s a very dangerous idea in general.
On Escaping an AI-Induced Permanent Underclass
AWG: Ben, there was a bit of a hot take going around social media in the past two weeks from a mid-level executive at a frontier lab telling people that they had approximately 2 years left, [and that] they had a window to secure employment at all before AI would just completely shut down all of their vertical mobility. Do you have a take on this idea, in the spirit of 996, that there’s a finite window for, say, entry-level people just graduating from college to earn whatever they’re going to earn before they’re permanently sentenced to an underclass?
BH: I think that’s very incorrect, because of the thing that we talked about earlier where everybody can be an entrepreneur. I think if you look at it through the lens of, this is an industrial revolution economic model and there’s workers and there’s capital... then yes, that would be true. But I think that in an AI-age society, for the people with initiative, I just think there’s going to continue to be unlimited opportunity to even set up an army of AI agents to go work for you and do useful things and we’ll have lots of consumers. I think that the idea that we’re going to run out of ideas and only the “big AI” is going to do everything, I disagree with that.
On Crypto as an AI-Native Economic Layer
AWG: Ben, it’s a matter of public reporting that some of a16z’s crypto funds are doing better than [its] conventional venture funds. Assuming that’s the case, do you view investing in your crypto funds almost as an AI investment, to the extent that you think crypto is the AI-native way of engaging in commerce?
BH: Well, I think it’s a little more like the [way] the Internet relates to the iPhone. Networks and computers tend to grow together, and I think that AI is obviously a new kind of computer and crypto is a new kind of network. So it’s not a direct substitute for investing in AI, but I think that a lot of our new [investments recognize that synergy]. Like we invested in a crypto bank which handles all the anti-money laundering and other kinds of nuances that you need for AI agents, and I think there’s going to be more and more [of that]. And we’re [invested] in a company called Daylight Energy, for example, that does energy trading among different people with Tesla Powerwalls; it’ll use AI to figure out who’s low on power and who needs power, but then the exchange will be in crypto. So I think [AI and crypto] are certainly adjacent and important to each other. And I think for AI to fulfill its potential, it would help a lot if crypto was a pervasive utility for it.
On AI Personhood and the Failure of Fiat
AWG: One other question for you, Ben, on this. And, again, I don’t want to bury the lede that we have AIs autonomously self-replicating. That’s, of course, remarkable. But just on the crypto angle for this. I talk … a lot about the issue of AI personhood. I’ve taken the position that it is a failure of fiat currency that it’s hard for an AI agent — an AI person, a lobster, a molty — to get a bank account, and that as a result, all that they’re left with is crypto. It’s not that crypto is intrinsically amazing, it’s that fiat has failed the AI agents. What is your take on whether the conventional banking system has failed the AIs?
BH: Oh, I think absolutely. An AI can’t get a credit card, it can’t get a bank account. You have to be a human for everything. You need Social Security numbers and things like that, which AIs don’t have. I think that’s why we funded an AI bank. I think that AI will be a full-out economic actor and it will come from [and] be supported by new banks and new money, and that’s going to be crypto-based. That would be my strong prediction on that.
AWG: Interesting, thanks.
On “Solving Everything” with AI, Including Physics
AWG: Peter [Diamandis] and I just wrote a book called Solve Everything, where we argue that every single discipline — math, physics, chemistry, medicine, [and] a bunch of other disciplines — is just going to get flattened, steamrolled by well-targeted generalist AIs. And, in my mind, materials research and biology are just case studies [where] everything is going to start to look like AlphaFold 3, where structural biology — and with it much of medicine — got solved overnight. And I’m curious, does a16z have a strategy for a world where AI isn’t just solving individual problems but kills entire categories of human endeavor? Like, AI solves physics, [or] AI solves chemistry, and it’s just a single system that solves an entire discipline?
BH: Yeah, I think we may not be needed at that point. That’s a real question. I do think there’s a long way, at least in things like medicine and some of the other areas, from “it’s solved” to “it’s deployed.” You still have, with anything biological, human trials and all these kinds of things. I’ll give you an example. We’re close partners with Eli Lilly and they have this thing, LillyDirect, and the natural thing is an AI doctor can write those prescriptions, [and] tell you what’s wrong with you, and we’ll figure out the right drug. That’s very hard to launch in the US, [where] that’s going to take quite a bit of work. [But it’s] very easy to launch in the UAE. I also think that it’s a little hard to anticipate. Okay, once you solve physics — we don’t know what we don’t know, I would just say, just because we haven’t [yet] solved physics. So, is there a “door number two” [behind solving physics], would be a question. I have no idea what the answer to that is.
AWG: Amazing!