A Conversation with John Werner
John Werner is the Founder and CEO of Imagination in Action and a Managing Director at Link Ventures. Below is a transcript of our recent conversation, recorded on December 9, 2025, which has been lightly edited for clarity.
NeurIPS and the State of AI
JW: You were just on the West Coast, 30,000 people at a conference for basket weaving? What was the conference you were out at?
AWG: The conference was NeurIPS, the Neural Information Processing Systems Conference, which is the largest AI conference in the world.
JW: You’ve been tracking AI as well as anyone I know and I’m curious, any takeaways from being there? And did it, you know, fortify your thinking about the direction things are heading? Any new ahas?
AWG: Many ahas. [Here are my] top few: in the hallways the most spoken language that I heard was Mandarin. I thought that was notable. The American frontier AI labs have largely gone dark at this point when it comes to publishing at the top AI academic conferences, leaving a bit of a power vacuum, if you will, that Chinese frontier labs are quite visibly rushing in to fill. Alibaba had, I think, more than 130 papers, including a best paper award at the conference. I think that’s pretty instructive.
AWG: There were humanoid robots. I think robotics – just trying to capture the zeitgeist of the conference — robots, and humanoid robots in particular, are now widely perceived as the obvious next big thing after AI agents. There was a “solve everything,” I think, spirit to the conference as well. The Chan Zuckerberg Initiative, founded by Mark Zuckerberg and his wife Priscilla Chan, which is now sort of rebranded as Biohub, was very visible at the conference. And their branding of using AI to cure all disease, I think, very much embodied the spirit that at this point AI is very likely to solve essentially all math, science, engineering, and medicine problems in the next few years.
AWG: Previously the Chan Zuckerberg Initiative, CZI, when they were founded had the mandate that they were going to cure all disease in the next century, and now it’s just in the next few years. So timelines have sped up.
AWG: And I think the perception—I would say my perception—is whereas previously maybe there was some discussion in technological communities about a Singularity, a Technological Singularity when, either depending on your definitions, AI would someday become smarter than all of humanity or the pace of change would accelerate so much that we couldn’t predict what was going to happen from one day to the next... I think it’s pretty clear at this point that that was all an optical illusion. That, in the same sense that a mountain appears to be sort of a point or a vertical line at a distance, but when you get up close there are foothills and it’s more of a smooth gradient, same sense here. I think it’s pretty clear at this point the Singularity was just an optical illusion and we’re in it. We’re at least in the foothills of it.
AWG: And at this point we’re drowning in artificial intelligence. We’re going to be drowning in a lot more artificial intelligence in the next two or three years. And the sense of the conference [was, in the] next two to three years we’re going to see utter transformation of – not just solving intelligence – but using intelligence to solve everything else.
The End of Moore’s Law & The Rise of AI Economics
JW: Moore’s Law has been talked about a lot. Can you walk us through the equivalent of Moore’s Law for how AI is transforming? Like what are the X and Y axes? You know, where are we on that? You’re sort of touching on that. What do you know that most people don’t know? You know, I think some of the AI experts 5 years ago predicted what’s happening right now was 90 years away. I don’t think you predicted that; you said it was going to happen this fast when I talked to you. But is the next five years going to be even faster? Just, you know, make the comparison: Moore’s Law to AI law.
AWG: Moore’s Law, for those not tracking, was coined by Gordon Moore, co-founder of Intel. The original prediction was that the density of transistors that fit on a chip would double every 18 months; it was later amended to every 24 months. But basically a doubling of transistor densities every two years.
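To make the compounding concrete, here is a minimal back-of-the-envelope sketch. The numbers are normalized and purely illustrative (not historical chip data); only the doubling-every-two-years rule comes from the conversation.

```python
# Illustrative compounding of Moore's Law: transistor density
# doubling every 24 months. Densities are normalized to year 0.
base_density = 1.0       # normalized density at year 0
doubling_period = 2      # years per doubling

for years in (2, 4, 10, 20):
    density = base_density * 2 ** (years / doubling_period)
    print(f"after {years:2d} years: {density:,.0f}x baseline density")
# after 20 years: 1,024x baseline density
```

Ten doublings over 20 years yield a roughly thousandfold density increase, which is why even a modest exponential outruns any linear trend.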
AWG: Moore’s Law is basically dead. That’s, at this point, only a mild hot take. It’s dead. Thermal limits were hit. Those who remember the gigahertz races of computer chips in the ‘90s...
JW: Pentium?
AWG: That’s right, Pentium. For those paying close attention, do you remember how the clock speeds of microprocessors plateaued at around 4 GHz and basically stopped after that? That was the so-called Dennard scaling limit being hit. So Moore’s Law, which survived a bit past hitting the thermal limit in the mid-2000s, is basically dead as well. It’s in its final stages. It’s being stretched out where it’s no longer doubling areal densities every two years; now it’s every 3, 4, 5 years and it’s just going to asymptote – probably – out until we have some next major revolution. That’s Moore’s Law.
AWG: AI is on a totally different... it’s called an experience curve. That’s a generalized term for things that look like Moore’s Law over time. There are so many experience curves for AI. One of my favorite ones of late—there are a few different variants of it—is OpenAI’s own analysis of the cost of intelligence over time, which found that the median cost of a given unit of AI capability is deflating by approximately 40x per year. Humanity has never seen sustained 40x year-over-year deflation in anything like this, to my knowledge.
AWG: And I think you can nitpick, “oh well, is it actually 1,000x per year or is it only 10x per year?” But, ballpark, having the cost of something—let alone the cost of something as generally useful as intelligence—drop by 40x year-over-year sustainably over multiple years is something humanity has never seen.
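As a quick sketch of what that rate implies if it holds (the ~40x/year figure is the conversation’s assumed, approximate rate, not a verified constant), compounding it for a few years:

```python
# Back-of-the-envelope: cost of a fixed unit of AI capability if it
# deflates by ~40x per year (illustrative rate assumed above).
deflation_per_year = 40
initial_cost = 1.0       # normalized cost today

for year in range(1, 4):
    cost = initial_cost / deflation_per_year ** year
    print(f"year {year}: {cost:.2e} of today's cost")
# year 3: 1.56e-05 of today's cost
```

Three years of 40x/year deflation cuts cost by a factor of 64,000, which is why the exact multiplier (10x vs. 1,000x) matters less than the sustained exponential itself.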
AWG: And my expectation is: right now, sure, that takes the form of a leap from chatbots that you can have fun but relatively superficial conversations with, to reasoning models that you can ask hard problems in math, science, and engineering to and actually get novel solutions out, to a bunch of things that I expect to happen after that. This intelligence, this 40x-ish year-over-year deflation, it’s such a black hole that I would say there’s no way it remains confined to data centers. When you have the cost of something as fundamental as intelligence deflating so quickly over such a sustainable period of time, that’s going to pull in the rest of the economy.
AWG: We spoke earlier about service economy jobs, knowledge work jobs, being automated. The cost of intelligence dropping sustainably 40x year-over-year, that’s going to pull in all the manual labor as well. I just ordered my first humanoid domestic robot, expecting delivery in the next few months. That’s going to pull in every bit of manual labor over the next few years. It’s going to pull in math, science, [and] engineering grand challenges. My expectation is they’re all going to start falling, one by one.
AWG: If you remember DeepMind’s pretty miraculous achievement that resulted in Demis Hassabis and others winning a Nobel Prize in Chemistry a few years ago, solving protein folding. The protein folding problem—starting from the sequence of a protein and predicting its final structure—has many, many biological and medical applications. Predicting the structure of a single protein used to be the topic of a five- or six-year biology PhD. And then, thanks in large part to AlphaFold, essentially all protein structures got solved overnight. It’s a bulk solution of a large chunk of structural biology that was just swept away by AI.
AWG: I think this model where AI bulk-solves fields essentially overnight, we’re just going to see this happen field-by-field-by-field over the next few years. Right now, the field that’s in the process of being steamrolled is math. So if you’re following math closely right now, there are various collections, repositories of open unsolved problems in math, most famously at the moment a set of problems that were identified by the Hungarian mathematician Paul Erdős. There’s a website that collects [and] curates open so-called Erdős problems. And pretty infamously now, in the AI and math communities, one by one all of the open unsolved problems that were curated in the Erdős problems list are getting solved by AI. This is happening literally day by day at this point. So this is, I think, the template of a near future that we’re going to find ourselves in.
Future Predictions: 5 to 50 Years
JW: I know you’re very well versed on a number of things. Paint a picture: what does the future city look like in 50 years around the planet?
AWG: Oh gosh, 50 years? I mean that’s past every event horizon at this point. Ask me about five years.
AWG: 50 years... so maybe just as a provocative response, I would say 50 years from now — so we’re talking about 2075 — if we don’t have cities of humans uploaded into data centers, if we don’t have colonies throughout the Solar System... and maybe I have to self-censor a little bit in terms of some of the wilder extrapolations, but I’d say if we don’t have humans both in the cloud and in the clouds, then something has gone horribly wrong. And, by the way, John asked me [about] 50 years from now. I would probably render the same prediction for 15 or 20 years from now.
Ontological Shocks: Solving Science & Medicine
JW: What are some things that you know that you think are most shocking that are going to play out in the next 48 months?
AWG: Expect a lot of ontological shocks from math, science, engineering, and medicine being solved. We don’t have, really, any precedent for entire domains getting solved at once. There are probably, I would expect, grand discoveries [and] new technologies that AI is just going to solve effectively overnight that will be very surprising to the vast majority of humanity.
AWG: And we’re accustomed to slow progress. We’re accustomed to sort of a smoothing of mini-singularities when we go from not knowing something to knowing something. We have really no cultural precedent for making a lot of discoveries and a lot of inventions at once. And I think, naively, I would expect – unless we engineer our societal structures and our governance appropriately – we’re going to have a bit of indigestion as a human civilization.
AWG: What happens if, for example, the top 5,000 diseases get cured overnight? We don’t have the governance in place to run 5,000 clinical trials for 99%-guaranteed drug candidates to solve diseases. It doesn’t exist right now. So we’re going to have to, I think, start to reorient our governance, and our ability as a human civilization to metabolize new discoveries, to accommodate bulk discovery and bulk invention.
The Innermost Loop & Dyson Swarms
JW: Let me ask you this. You follow all the top tech companies. How would you grade them? How are they doing? What would you like to see them doing more of?
AWG: There are so many companies. It’s sort of a trick question because, companies in what sense? We have companies that are large by market cap, companies that are doing the most innovative research. If we grade, for example, by highest marginal impact on the future, I think there’s no question that the companies having the largest marginal impact are those with the strongest AI models right now.
AWG: I’ve spoken here and there about this idea that our civilization has what I call an “innermost loop.” In computer science, when you’re trying to optimize the performance of a computer program, there’s this notion that because many programs are iterative in nature and repeat bits of code over and over again in so-called loops, if you want to make a program much faster you should focus on the innermost loop, because that’s the code that’s going to be executed the most frequently.
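A toy sketch of that computer-science intuition: in a pair of nested loops, the innermost body executes the product of the two trip counts, so nearly all of the program’s work happens there.

```python
# Toy illustration: the innermost loop body runs outer * inner times,
# so it dominates the program's total work.
outer, inner = 1_000, 1_000

outer_runs = 0
inner_runs = 0
for i in range(outer):
    outer_runs += 1        # executes 1,000 times
    for j in range(inner):
        inner_runs += 1    # executes 1,000,000 times

print(outer_runs, inner_runs)  # → 1000 1000000
```

A 10% speedup of the inner body saves a thousand times more work than the same speedup of the outer body, which is why optimizers target the innermost loop first.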
AWG: I think the concept generalizes to civilization and the human economy. I think there is, at this point, for the first time in history at least – to my knowledge – for the first time in history, an innermost loop to civilization. And that looks approximately like robots that are helping to build fabs, that are building chips that go into data centers—AI data centers specifically—that are being used to train models that are then being used to guide the robots. The loop is complete. And so the companies that are having the outsized impact are the companies that are in that innermost loop, that are accelerating it in one way or another.
AWG: Then there is energy also, of course, throughout the process. So I get very excited about companies that are making energy post-scarce. We talk all the time about energy: nuclear fission, nuclear fusion, solar, space-based solar [energy] for data centers. I think that’s going to have a humongous marginal impact.
AWG: We talk about space-based data centers a lot. Many futurists have over the years sort of jokingly talked about a time when it turns out to be more economically feasible to deploy AI data centers not just on land but in space—and I think at some point, very likely in the next 5 to 10 years, that will happen. And not just in Earth orbit, but also in solar orbits. And not just a few data centers, but a lot of data centers. And not just a lot of data centers, but so many data centers that we would have to, in extremis, disassemble the rest of our Solar System in order to build the data centers. This is the so-called Dyson Swarm concept.
JW: Not [a] Dyson Sphere?
AWG: Not [a] Dyson Sphere. This is Freeman Dyson, the physicist. [The] Dyson Sphere is this idea of a solid sphere that we would build maybe somewhere around the habitable zone, on the inside of which lots of humans could live. This is probably an impractical concept. A Dyson Swarm is this idea of taking apart large planets, like Jupiter for example, and turning that matter into a flying swarm of lots of orbiting computers.
AWG: So some scenario like this, I think, is looking increasingly likely. I’ve joked, in [the] past, that maybe the Moon is target number one. There’s growing interest in disassembling the Moon, at least in part, in order to build data centers in low Earth orbit and GEO. It’s a very attractive target if you’re looking for mass that isn’t already deep down in the Earth’s gravitational well.
AWG: Companies that are advancing us toward the Dyson Swarm vision, as one possible extremal limit of this innermost loop—I think that’s pretty exciting and is going to have an enormous marginal impact over the next five to 10 years.
JW: Ladies and gentlemen, Alex Wissner-Gross.


