Welcome to May 8, 2026
The Singularity is now requisitioning orbital real estate. Anthropic just signed a partnership with SpaceX that hands it the entire Colossus 1 data center, unlocking 300+ MW and over 220,000 NVIDIA GPUs within the month, doubling Claude Code rate limits and ending peak-hour throttling for Pro and Max users. SpaceXAI confirmed the deal extends into “multiple gigawatts of orbital AI compute”: terrestrial power, land, and cooling no longer match the required cadence, and SpaceX is the only outfit with the launch economics and constellation experience to make space-based compute a near-term engineering program rather than a research concept. Anthropic Chief Compute Officer Tom Brown summarized the play as “moving a lot of atoms,” ideally off-planet, adding that nobody is better at the task. Elon Musk vouched for the Claude team after a week onsite, noting “no one set off my evil detector,” and simultaneously shut down xAI as a separate company, with Anthropic moving into Colossus 1 just as SpaceX’s freshly absorbed AI lab decamped for Colossus 2. The demand fully justifies the orbital pivot: Dario Amodei revealed Anthropic grew 80x annualized in Q1 against a planned 10x, with compute unable to keep pace with the sheer extremity of growth.
The capital markets concur. Anthropic’s pre-IPO valuation just hit a record $1.2 trillion in onchain pre-IPO trading, up another 20% in seven days and up 900% since October, and naive ARR extrapolation has Anthropic absorbing 100% of global GDP in 21 months, absurd until you recall the product is cognition itself.
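The “21 months” figure is just compound-growth arithmetic. A minimal sketch of that naive extrapolation, assuming hypothetical inputs of roughly $50B ARR and $110T global GDP (neither stated above), with revenue compounding at the 80x annualized rate from the previous item:

```python
import math

def months_until_exceeds(arr_usd: float, target_usd: float, annual_growth: float) -> float:
    """Months until revenue, compounding at `annual_growth` per year, exceeds `target_usd`."""
    return 12 * math.log(target_usd / arr_usd) / math.log(annual_growth)

# Hypothetical inputs: ~$50B ARR, ~$110T global GDP, 80x annualized growth.
months = months_until_exceeds(50e9, 110e12, 80)
print(f"{months:.0f} months")  # ~21 months
```

The timeline is logarithmic in the revenue gap but wildly sensitive to the growth rate, which is why naive ARR extrapolations produce headlines like this one.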
The models keep earning the spend. Opus 4.7 took the top spot on Scale Labs’ new Refactoring Leaderboard at 48.57, beating GPT-5.5 Codex on refactoring production-scale repos. Anthropic also unveiled Model Spec Midtraining, letting models study their own values before alignment fine-tuning, essentially reading the syllabus before the exam. The harder ProgramBench asks agents to rebuild full codebases from a binary alone, where Opus 4.7 leads at 3% “almost resolved” and 0% fully solved, a humbling reminder that the ladder still has rungs above us.
Agents are also training themselves overnight. Anthropic launched “dreaming” in Claude Managed Agents, a scheduled process that reviews session histories and curates shared memories across teams. Search is leaning on humans the other way. Google AI Overviews will surface more first-hand Reddit and expert-blog accounts, while Chrome has started quietly installing 4 GB of Gemini Nano on every desktop with available storage.
The silicon underneath is being violently reorganized. Enthusiast PCs are footing the bill, with motherboard sales collapsing over 25% as wafers redirect to AI accelerators. Musk’s Terafab in Texas is projected to cost $55 to $119 billion across phases, while Arm doubled its AI-chip guidance to $2 billion of 2027-2028 sales just one month after launch. Nvidia is putting $3.2 billion into Corning for three new US optical-fiber plants, because copper has run out of bandwidth. Riding the protocol layer above the new glass, OpenAI, AMD, Broadcom, Intel, Microsoft, and Nvidia jointly open-sourced MRC, a multipath protocol that keeps GPUs synchronized across cluster failures.
The buildout is redrawing physical geography. The European Commission is weighing rules restricting US cloud platforms from processing sensitive government data, naming sovereignty as the next constraint after compute. Even lidar is having a second act beyond robotaxis, now babysitting 800-foot wind turbines and 1,500-ton shipyard gantries. And Texas just passed California in utility-scale solar capacity, quietly inverting the geography of clean energy.
Bodies are getting upgraded in parallel. Neuralink’s surgical robot is being rebuilt to reach any brain region, aiming for a generalized neural interface to treat any condition that originates there, generalizing the implant the way Anthropic generalized cognition. Meanwhile, Amazon Pharmacy Kiosks will start dispensing Novo Nordisk’s Ozempic pill, because the future of metabolism is a vending machine on the corner.
Finance and statecraft are repricing in tandem. Morgan Stanley launched crypto on E*Trade at 50 bps, undercutting rivals on price. South Korea’s stock market overtook Canada’s as the world’s seventh largest, propelled by AI silicon demand. Washington and Beijing are weighing official AI talks at next week’s Trump-Xi summit, hoping to keep the digital arms race from going kinetic.
The skies are being unsealed too. The Department of War launched PURSUE, the new Presidential Unsealing and Reporting System for UAP Encounters, a coordinated records release covering tens of millions of documents across decades and dozens of agencies, with new declassified tranches dropping every few weeks per the President’s historic directive to publish all “Government files related to alien and extraterrestrial life.”
The truth may be out there, but so are the next data centers.



AWG I'm watching the latest episode and it's fantastic. Haha, you and DB2 both took the high road, showing real self-restraint in the face of some ridiculous statements from someone who is clearly totally AI illiterate. I look forward to seeing the new Kotler Bench, which I'm sure will set us all clueless AI researchers straight 🙄🤣. And of course you were right: don't feed the trolls.
The browser detail is easy to miss, but it may be one of the more important enterprise implications here.
AI is moving from a separate destination to an embedded layer inside everyday tools. For consumers, adoption can happen almost invisibly. For enterprises, that same convenience creates a governance challenge.
Most companies cannot allow every new embedded AI capability into the operating environment by default. They have to manage data exposure, security, compliance, and accountability.
So the bottleneck may not be access to AI. It may be how quickly an organization can decide what AI can touch, how outputs get checked, and where human judgment still owns the outcome.
That creates a window for employees. Enterprise adoption may move slower than the technology itself, but that is not a reason to wait. It is a chance to get fluent now, experiment responsibly, and learn how to use AI before it becomes a normal part of the job.
The advantage goes to organizations that build safe speed, and to employees who actively learn these tools both inside and outside their current workplace.