Welcome to May 15, 2026
The Singularity has begun optimizing its own optimizer. Poetiq turned its “Meta-System” loose on LiveCodeBench Pro, let it build its own harnesses, and hit a new SOTA of 93.9 atop GPT-5.5 with “no fine-tuning, no special access, no hand-built pipelines.” Prime Intellect handed Codex and Claude Code its idle compute to attack the NanoGPT Speedrun optimizer track, and after some 14,000 H200 hours both agents beat the human baseline, with Opus 4.7 now holding the record at 2,930 steps. The architecture is learning to settle down too, as new “Attractor Models” let one module propose embeddings and another solve for the fixed point, taming Looped Transformers enough for a 770M model to outrun a 1.3B one on twice the tokens. Raw scale refuses to tap out, as Datadog’s open-weights time-series foundation model Toto 2 keeps improving with no saturation at 2.5B parameters. Training is escaping Nvidia. Zyphra’s ZAYA1-8B, the first MoE trained end to end on an AMD Instinct stack, wrings frontier intelligence from every active parameter. The bar for competence keeps rising. Mechanize’s GBA Eval asks a model to write a Game Boy Advance emulator from scratch in 24 hours, and GPT-5.5 already clears it 53.2% of the time.
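The "Attractor Model" idea described above, where one module proposes an embedding and another solves for the fixed point, resembles deep-equilibrium-style fixed-point iteration. Below is a minimal toy sketch of that pattern; the update rule, the function names, and the use of a small-norm weight matrix to guarantee a contraction are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def attractor_step(z, x, W, U):
    # One application of the looped block: a toy contractive update.
    # tanh bounds the output; small weights in W keep the map a contraction.
    return np.tanh(W @ z + U @ x)

def solve_fixed_point(x, W, U, tol=1e-6, max_iters=500):
    # Iterate z <- f(z, x) until the embedding stops moving:
    # the "attractor" z* satisfies z* = f(z*, x).
    z = np.zeros(W.shape[0])
    for _ in range(max_iters):
        z_next = attractor_step(z, x, W, U)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
d = 8
W = 0.1 * rng.standard_normal((d, d))  # small spectral norm -> contraction
U = rng.standard_normal((d, d))
x = rng.standard_normal(d)

z_star = solve_fixed_point(x, W, U)
# z_star is (numerically) a fixed point of the update.
print(np.allclose(z_star, attractor_step(z_star, x, W, U), atol=1e-5))  # prints True
```

Because the map is contractive, Banach's fixed-point theorem guarantees the iteration converges to a unique attractor regardless of the starting point, which is presumably what makes the looped computation stable enough to train.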
As the machines get better at the work, institutions scramble to govern the output. The arXiv will now hand one-year submission bans to any author caught shipping AI-generated plagiarism, fake references, or errors they plainly never checked, since automation still rewards proofreaders. The convenience keeps deepening regardless, as OpenAI has put Codex inside the ChatGPT mobile app to dispatch work across your laptops and devboxes, and shipped a personal-finance preview letting US Pro users link accounts and interrogate a dashboard of their spending. The same capability carries a darker dual use, and Palo Alto Networks warns of a narrow “three-to-five-month window” to harden systems before models like Anthropic’s Mythos and OpenAI’s GPT-5.5-Cyber make exploiting unknown vulnerabilities routine.
All of it has to run somewhere, and the silicon supply chain is rearranging in real time. Washington has cleared around ten Chinese firms to buy Nvidia’s H200, its second-most-powerful AI chip, even as Apple reportedly begins producing legacy iPhone, iPad, and Mac processors at Intel on its 18A-P node. Investors are funding the buildout too, with Cerebras closing up 68% in one of the largest US tech IPOs in years, at a $95 billion market cap. The neighbors are a tougher sell, as a new Gallup survey finds 7 in 10 Americans would oppose a data center near their home, with opposition so strong that more would rather live beside a nuclear plant than the warehouses powering the boom.
If the data centers cannot find room on Earth, the plan is to leave it. Jensen Huang says computing will soon demand 1,000x more energy than humanity now generates, to which Elon Musk replied that “Space is the only way,” a nod to the Dyson Swarm now treated as a roadmap, not a thought experiment. Closer to home, Verizon, AT&T, and T-Mobile are forming a joint venture using direct-to-device satellites to erase nearly every rural dead zone and keep networks alive through disasters. The skies are turning transparent in another sense. CIA whistleblower James Erdman III, who led the ODNI’s Director’s Initiative Group probe into UAP, testified to the Senate that the agency illegally monitored his investigators’ communications with whistleblowers, while Rep. Burlison says the White House will soon issue a UAP memo compelling agencies, under stiff penalties, to release any and all information.
If the truth is getting easier to disclose, value is getting harder to judge. A social experiment posted Monet’s genuine “Water Lilies” to X labeled as AI art, drawing earnest complaints that it had “some spark missing” and would not survive beside the real thing, proof that the uncanny valley now lives mostly in our heads. The capital flows are less ambiguous. Anthropic has agreed to terms on a $30 billion raise valuing it at $900 billion, and, separately, together with the Gates Foundation, pledged $200 million over four years toward AI public goods in health and education. Governments want their cut, with California’s Gavin Newsom pitching a 7.25% tax on cloud software just as Bitcoin pushed past $80,000 after the Senate advanced the Clarity Act, handing the CFTC primary oversight of crypto. Not every deal stays friendly, since OpenAI is weighing legal action against Apple for burying ChatGPT inside Siri, even as its trial against Elon Musk winds down. The stakes are ultimately geopolitical, and Anthropic’s new paper on US-China AI competition sketches two versions of 2028, one where democracies defend their compute advantage and set the rules, and one where export-control loopholes hand that role to authoritarian regimes.
In the race to 2028, it’s the compute, stupid.



Despite the ongoing discussion about AI-generated content, in the end it’s about the product itself, regardless of who produced it. If the music or the movie is touching, if a book’s story is engaging, if a service makes things easier, it will be adopted by the public. It’s just this transition period where we have reasonable doubts. Does anyone have any idea what 1,000x more compute really means? The race is on, and ALL of humanity (including Alex) worries about the speed, as neither governments nor companies, and surely not the public, are able to cope with it. That’s the downside.
The Monet experiment was fun. An AI placebo effect. I bet if they made up a fictitious artist and posted an actual AI-generated painting, people would think it was much better than the real painting they believed was AI-generated. That would demonstrate how inaccurate opinions are, despite the opinionated averring to their death that their opinions are fact.