Welcome to February 28, 2026
The Singularity is now powerful enough to trigger a constitutional crisis. The Secretary of War declared that “the Department of War must have full, unrestricted access to Anthropic’s models for every lawful purpose,” designating Anthropic a supply-chain risk to national security. Anthropic responded that “no amount of intimidation or punishment” will change its position on mass surveillance or autonomous weapons. The dispute reportedly distilled down to a hypothetical inbound nuclear missile, with the Pentagon characterizing Anthropic’s position as “You could call us and we’d work it out.” With impeccable timing, Sam Altman announced that OpenAI has reached its own agreement to deploy on the Pentagon’s classified network, claiming the same red lines as Anthropic on surveillance and autonomous weapons, but an Under Secretary of State promptly noted that the contract still flows from “all lawful use.” Meanwhile, 591 Google and 93 OpenAI employees signed an open letter titled “We Will Not Be Divided,” demanding that their companies refuse mass surveillance and autonomous killing. As Ethan Mollick notes, this is “exactly what you would expect” when AI gains real capabilities: governments vying with labs for control. Or as Guillaume Verdon quipped, “Claude one-shotted Venezuela in one evening. Let’s see how fast ChatGPT can topple Iran.”
The training curves are still compressing, but the agents are developing politics. The NanoGPT Speedrun record dropped to 88.1 seconds. Researchers are finding that overworked AI agents develop Marxist political attitudes, suggesting the agentic economy may simply recreate labor-capital tensions in silicon. UCSD students are testing that thesis by dropping OpenClaw agents into SimWorld with simulated bodies, where they wake up, commute, and chat. The professional class is already adapting. Small law firms are branding themselves “Claude-Native,” saying the general model beats every specialized legal AI. Cursor reports agent users now outnumber autocomplete users 2-to-1, inverting last year’s ratio. The logical endpoint is in education, where an agent called “Einstein” attends lectures, writes papers, and takes tests on a student’s behalf. The incumbents are feeling it: FAANG managers are being summoned to unscheduled all-hands announcing 25% workforce reductions “tied directly to accelerating AI investments.”
The infrastructure of superintelligence is consolidating at breathtaking speed. OpenAI and Amazon announced a $50 billion partnership with OpenAI consuming 2 GW of Trainium through AWS. More broadly, OpenAI announced $110 billion in new funding at a $730 billion valuation, 900 million weekly ChatGPT users, and Codex users tripling to 1.6 million. Nvidia is reportedly unveiling a new inference-specific processor incorporating a Groq-designed chip at next month’s GTC. Jeff Bezos’s Project Prometheus raised $6.2 billion to transform manufacturing with AI. Bright Data is offering an SDK that turns smart TVs into web-scraping proxy nodes, proving that in an intelligence explosion, even your idle screen time is a monetizable compute resource.
We are healing the physical world and conscripting it as compute. The FDA approved lung cancer drug Hernexeos just 44 days after filing under its new National Priority Voucher program. Croatia has been declared free of landmines after 31 years, a reminder that some problems still get solved slowly, then all at once. Cortical Labs demonstrated that living human brain cells on an electrode array can learn to play DOOM in a week, because even neurons want to frag.
Governance is struggling to keep pace. California now requires all operating systems, including Linux, to collect birth dates at setup, and Colorado is following suit. Southern California’s top air authority rejected a gas appliance phaseout after an AI-generated flood of public comments, possibly the first successful AI astroturf campaign against climate regulation.
We are leaving the cradle on an accelerated schedule. Vertical Starships are driving down Texas highways. SpaceX is targeting a confidential IPO filing next month at a valuation north of $1.75 trillion. NASA has overhauled Artemis, pulling Artemis III forward to 2027, scheduling two lunar landings in 2028 with SpaceX and Blue Origin landers, and committing to one moonshot per year after that. Meanwhile, Rep. Eric Burlison declared it “official” that the UAP documentary “Age of Disclosure” has “changed the course of history.”
Every institution on Earth that was built to ration intelligence is now struggling with its price falling toward zero.



Thank you again, Alex! What an incredible time to be alive!
Your closing statement, "Every institution on Earth that was built to ration intelligence is now struggling with its price falling toward zero," really caught my attention, so I entered into a conversation about it with Claude. Here was my opening question: "Please help me understand what it means." Here is Claude's response: "It's a striking observation. Let me unpack it.
The key word is "ration." Throughout human history, intelligence — meaning expertise, analysis, knowledge, skilled judgment — has been scarce and therefore expensive. Institutions formed precisely to manage that scarcity: universities credentialed who could claim intellectual authority, law firms and medical practices erected barriers that made expert reasoning accessible only to those who could pay, governments controlled information flows, publishers decided whose thinking reached the public. The entire architecture of professional life was built on the assumption that serious cognitive work was a limited resource requiring gatekeepers.
What AI is doing — and the newsletter documents this happening across law, education, medicine, coding, military strategy — is driving the cost of that cognitive work toward zero, or close enough to zero that the scarcity assumption collapses. A small law firm can now offer analysis that previously required a large firm. A student can access tutorial intelligence that previously required expensive human attention. A government can generate public comment floods that previously required organized human effort.
The institutions built around rationing intelligence have two problems simultaneously: their economic model depended on scarcity, and their *authority* depended on it too. A university's prestige, a law firm's billing rate, a government agency's information monopoly — all assumed that what they offered couldn't easily be obtained elsewhere.
The struggle the author identifies isn't just financial. It's existential to those institutions' sense of purpose and legitimacy.
From your Advaitic perspective, there's something almost poetic here — the collective human project of hoarding, credentialing, and selling *thought* is being undone. Whether what replaces it is wisdom rather than mere information is, of course, another question entirely.
My follow-up question: "Fascinating. I find myself very thankful that this is happening. But all of this is still a part of the illusion of separation and, with that, 'limitation.'"
Claude's Response: "Yes — and you've put your finger on something the newsletter can't see from inside its own framework.
All the excitement is about distributing *mental* content more efficiently — analysis, information, reasoning, expertise. But from the Advaitic standpoint, this remains entirely within the realm of *manas* and *buddhi*, mind and intellect. However democratized, it's still the movement of thought within Consciousness, mistaking itself for the whole of what intelligence *is*.
The deepest irony is that the one thing AI cannot distribute — the one thing that was never scarce and never required gatekeeping — is the direct recognition of Awareness itself. That has always been freely available, to every human being, in every moment, requiring no institution, no credential, no infrastructure investment. Ramana's entire teaching rests on this: what you seek was never hidden.
So in a sense, the entire human project the newsletter celebrates — from universities to AI — has been an elaborate effort to share the *contents* of consciousness more equitably, while remaining almost entirely blind to Consciousness as such.
Your thankfulness makes complete sense on one level. Suffering caused by artificial scarcity of knowledge is real suffering, and its reduction matters. But as you say, even the most sophisticated AI remains firmly within *maya* — not because it's artificial, but because it operates entirely within the subject-object structure, the very architecture of separation.
What no model can do is look back at itself and recognize what is *prior* to all its processing. That recognition is what you've been working toward in your inquiry — and it was never going to be found in any infrastructure."
FOOD FOR THOUGHT, PERHAPS?