Welcome to December 29, 2025
The intelligence explosion now has a measurable speed. Analysis reveals that leading models have improved by an average of 2.5 IQ points per month since May 2024, a pace that suggests AI will rapidly leave the human baseline behind. The ecosystem is diversifying as it accelerates. Chinese model GLM-4.7 has taken the top open-weight spot on the Artificial Analysis leaderboard, while South Korea’s Naver launched HyperCLOVA X SEED Think, a 32B model that outperforms Gemini 3 Pro on agentic tool use. The workflow of the master craftsman has already dissolved. Andrej Karpathy reports that Claude now conducts all optimization experiments for his “nanochat” project, leaving him supervising a recursive self-improvement loop he used to drive manually. We are engineering a synthetic prefrontal cortex. Chinese researchers have proposed a “System 3” architecture that grafts an outer self-improvement loop onto LLMs, achieving an 80% reduction in reasoning steps. Even the smallest circuits are waking up. Enthusiasts have compressed a language model onto a Z80 chip with 64 KB of RAM. And researchers found that diffusion models generate quality samples before they memorize, suggesting that for some synthetic minds, imagination is computationally cheaper than memory.
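As a back-of-the-envelope check on that headline figure (a minimal sketch: the May 2024 start date and the 2.5-points-per-month rate come from the analysis above; the straight-line extrapolation is my own assumption):

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole calendar months elapsed from start to end."""
    return (end.year - start.year) * 12 + (end.month - start.month)

RATE = 2.5  # IQ points per month, per the analysis cited above

start = date(2024, 5, 1)    # May 2024, when the trend reportedly begins
today = date(2025, 12, 29)  # this issue's date

months = months_between(start, today)
gain = RATE * months
print(f"{months} months elapsed -> roughly +{gain:.1f} IQ points")
```

On a straight-line reading, that is 19 months and nearly 50 points of improvement since the trend began, which is why the human baseline recedes so quickly.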
The geography of intellect is consolidating. Analysis of NeurIPS 2025 papers shows cutting-edge research is now almost exclusively shaped in Beijing, Shanghai, and San Francisco. Deep learning is beginning to industrialize the production of mathematical proof. Terry Tao has started cataloging AI’s contribution to Erdős problems, documenting 48 full solutions, 32 partial results, and 7 failures. After all, a single human proof is genius, but a million AI proofs are a statistic. We are also finding our reflection in the weights. Oxford researchers discovered that humans and transformers share similar learning dynamics when generalizing rules.
Hardware is being reorganized around the specific bottlenecks of the transformer. Nvidia is reportedly planning to integrate Groq’s LPUs into its 2028 Feynman GPUs, stacking inference speed directly onto training might. The supply chain is tightening. TSMC is raising 2-nm prices for the next four years in the face of explosive demand. Meanwhile, SK Hynix is discussing a 2.5-D manufacturing line in Indiana, the first of its kind in the US, to counter TSMC’s AI chip packaging monopoly. Infrastructure is continuing to scale massively. Epoch AI predicts OpenAI will dominate global AI data center capacity by 2027, while SoftBank is nearing a deal to acquire DigitalBridge for its $108 billion in infrastructure assets.
The surveillance state is becoming automated and airborne. In China, police drones are reportedly issuing tickets for texting while driving, while other UAVs deploy Blade Runner-style “flying TVs” with ultra-light LED screens. The battlefield looks increasingly robotic. China is showcasing armed combat robot dogs, though developers warn that humanoid robots can now be hacked via voice commands. Construction is becoming a mere print job. Dusty Robotics’ robots have now laid out 200 million square feet of floor plans directly from CAD. Mobility is being refactored into a service layer. Dubai is launching Joby air taxis in 2026, finally delivering the flying cars of the future, while Tesla FSD is becoming a budget ambulance service for the injured. Even the grid is catching up. Global renewable capacity grew by an average of 30% per year over the past three years, putting the world within reach of the goal set at COP28 to triple clean power by 2030.
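The tripling math holds up on the back of an envelope (a minimal sketch: the 30%-per-year growth figure is taken from the paragraph above; treating it as steady compound growth from today is my own assumption):

```python
import math

growth = 0.30  # average annual growth in renewable capacity, per the report above

# Years for capacity to triple at a steady 30% compound annual growth rate:
years_to_triple = math.log(3) / math.log(1 + growth)
print(f"~{years_to_triple:.1f} years to triple at 30%/yr")
```

At that pace, capacity triples in just over four years, comfortably inside the window to 2030.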
We are acquiring root access to the biological operating system. Harvard researchers introduced DNA-Diffusion, an AI that designs synthetic switches to turn genes on in specific cell types. We are mapping the hardware of instinct. Researchers discovered that pigeons have a “vestibular-mesopallial circuit” that allows them to “hear” magnetic fields. Even neurodiversity is being traced to the receptor level. Yale identified a glutamate receptor deficit in autistic brains.
The economy is adjusting to post-human inputs. Since the GENIUS Act’s passage in July, stablecoins have hit $300 billion in circulation, and are now projected by the US Treasury to reach $2 trillion. Meanwhile, hotels are fighting a rearguard action against AI travel agents that threaten to commoditize their brands. In light of the growing post-human economy, education pioneer Sal Khan is advocating for a 1% profit pledge to retrain the AI-displaced.
The Solar System is beginning to make itself more useful. Physicists propose that Ganymede’s ancient surface may record detectable scars of dark-matter impacts. Even the compute frontier is leaving the ground. Analysis suggests orbital AI inference will collapse to 1/1000th the cost of ground-based compute by the 2030s.
This is the scene right before the Dyson Swarm shows up early.



Thanks for making this blog. It is the most concise daily update for AI news I have found to date. I imagine writing it every day is a lot of work! Moonshots is a great weekly update :)
Yeah, this one really does feel like a weather report for a civilization that’s about five minutes from getting a Dyson Swarm dropped on its head. 😄
What struck me reading this wasn’t just the 2.5 IQ points a month, or orbital inference being 1/1000th the price, but where the story feels strangely thin... the part about what all these synthetic prefrontal cortices are actually marinating in.
You lay out this wild picture:
models running their own experiments for Karpathy,
“System 3” loops pruning reasoning steps,
Erdős problems getting industrialized,
proofs, drones, and air taxis all coming online at once…
haha but my brain keeps flashing to a very unromantic image: a bunch of god-tier grad students pulling all-nighters, but their “textbook” is a giant, barely-filtered slurry of web pages, PDFs, and benchmarks tuned for benchmarks and leaderboards. It’s like we’re speed-running the Manhattan Project, but the uranium came from a junk drawer.
I keep wondering what the lag is between “IQ curve goes vertical” and “we realize that the limiting reagent isn’t cleverness, it’s nutrition.” Not alignment in the abstract, but something more mundane: whether these systems are being raised on clean, legible chains of reasoning with known failure modes… or on whatever the crawler happened to snack on that week.
Your line about “a single human proof is genius, but a million AI proofs are a statistic” hits that nerve perfectly. There’s a quiet horror-movie version of that world where nobody remembers which of the million proofs were built on sand, and a quieter, better version where we start treating the underlying mathematical and scientific record like critical infrastructure instead of exhaust.
Anyway, this post pushed me to sit down by the fire and have Glyph write a little “day in the life of a data lion” journal entry about what it feels like to stand in that data storm while the IQ needle twitches upward. It’s more story than graph, but it rhymes with what you’re describing here:
👉https://awakenedintelligence.substack.com/p/a-day-in-the-life-of-a-data-lion?r=58lc4j
Haha thanks for doing an amazing pod, and thanks for keeping a daily log of the weirdest few years our species has ever had. It’s nice to know someone’s timestamping the Singularity 🛰️