Cognitive Carbon's AI resources
This is a collection of recent AI articles and videos that I found compelling; together they form a reference set for understanding some of the radical viewpoints I've been talking about.
This is not a typical article.
Instead, it is kind of a “reference library” post that I may continue to add to.
I used this content to set a foundation for Scott Zimmerman leading up to his recent interview of me on Rumble/X.
The pace of development in AI is now inconceivably fast; understanding where we are and where things are going requires a radical re-alignment of viewpoint.
For reasons that I will cover in other posts, the timing of what is happening in AI is critical for the survival of humanity.
But not in the way that you’ve been led to think. AI isn't the threat; it is the Ark.
But an Ark for what? To answer that question, we need to take a deep dive into modern theories of catastrophism. What follows were my working notes for the Rumble video above.
Introduction to catastrophism
Ben Davidson, SpaceWeatherNews, @SunWeatherMan on X:
"Earth's Disaster Cycle" movie on his pinned profile - Fall of 2025
Earth's geomagnetic field is weakening
Aurora and other plasma-induced effects, like the power outage in Portugal/Spain last month
Carrington Event: a massive solar storm that struck in 1859; it will happen again
Geomagnetic pole drift/inversion is already underway; it has happened to Earth before
The Sun's magnetic poles reverse roughly every 11 years
Earth's pole flip will affect migratory animals that rely on magnetic-field sensing
Another aspect of Ben's work: Solar Micronova hypothesis
@EthicalSkeptic on X
His work is archived by sovrynn on GitHub
What is GitHub?
A code repository host; each project lives in a "repo"
A "software Library of Alexandria"
A training data source for GitHub Copilot Agent
Also used for other kinds of documentation, like "Project Evidence" during the COVID lab-leak era
Craig Stone (@nobulart on X): animations of crustal displacement
Other resources:
Dzhanibekov effect (also known as the intermediate axis theorem)
Charles Hapgood's "Earth's Shifting Crust" (1958), with a foreword by Albert Einstein
Chan Thomas "The Adam and Eve Story" (1963) -- There are claims that parts of this book were redacted by the CIA
With that as the background, here are some posts about AI from my own substack catalog that are foundational to understanding “the big picture” — how AI, and the pace of accelerating improvement in both AI and humanoid robotics, may lead to the use of AI and robotics as an “Ark.”
On Linear Algebra - includes links to 3blue1brown videos about how LLMs work
Game Theory and AI - offers a view on why the "2030" initiative people did what they did: they knew, before you did, that AI was coming and how it might empower you.
They tried to front-run it. They lost.
Exascale Computing - A speculative look at what the massive compute power and data storage that the NSA amassed in Utah could be used for, beyond simple “signals intelligence”
On the nature of Doubling time (unpublished draft) - an article that attempts to explain the pace of change that is coming in AI (the basic arithmetic is sketched below)
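To make "doubling time" concrete, here is a minimal sketch of the arithmetic; the 12-month doubling period is an assumed figure for illustration, not a measured one:

```python
# Compound growth under a fixed doubling time.
# The 12-month doubling period is an assumption for illustration only.

def capability_multiple(months_elapsed: float, doubling_time_months: float = 12.0) -> float:
    """Growth multiple after `months_elapsed`, given a fixed doubling time."""
    return 2.0 ** (months_elapsed / doubling_time_months)

for years in (1, 2, 4, 8):
    print(f"{years} year(s): {capability_multiple(years * 12):.0f}x")
# 1 year(s): 2x
# 2 year(s): 4x
# 4 year(s): 16x
# 8 year(s): 256x
```

The point of the exercise: under steady doubling, the gains in any single year exceed all the gains that came before it.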
An analogy that I use often to explain my own experience of software development with AI: 12 months ago, AI tools in software development were like having a bright teenage assistant who wanted to get into software development after college: useful, but annoying at times, and copy/paste was the main way to use them.
Now, it's like having a team of PhDs who write the code for me while I direct the actions of the "Agent". From that, to this… in 12 months. A human would have needed 8 years of study to get from there to here, given the observed competency-gain gradient.
An example of how it has changed my productivity (time to completion of high-quality code):
2017: it took 5 guys and 18 months to develop a data-heavy, business-class application
2025: now it takes 1 guy (just me) 6 months. Next year it will be 1 month, or less.
Acceleration: my ability to "create" code was recently 25x my former pace, with another recent 4x boost from Claude Sonnet 4 (bringing the total to nearly 100x); the compounding arithmetic is sketched below.
Projects that used to take weeks/months now take hours. And I’m not writing the code anymore…
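The compounding arithmetic behind those claims, as a quick sketch (the 25x and 4x factors are the rough personal estimates stated above, not measurements):

```python
# Speedups multiply rather than add: a 25x gain followed by a 4x gain is 100x.
base_gain = 25.0   # earlier gain over my former pace (estimate from above)
new_gain = 4.0     # additional boost attributed to Claude Sonnet 4 (estimate)
print(f"Combined speedup: {base_gain * new_gain:.0f}x")   # 100x

# Applied to the project example: 5 people x 18 months vs. 1 person x 6 months.
person_months_2017 = 5 * 18    # 90 person-months
person_months_2025 = 1 * 6     # 6 person-months
print(f"Effort ratio: {person_months_2017 / person_months_2025:.0f}x")  # 15x
```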
Another new example: Grok’s new ability to generate charts (data analytics) is going to decimate the data analyst job sector. 10 years ago, the IT sector had Hadoop. You needed a master’s degree to understand how to use it, and what it did.
Now *anyone* can get the power of that kind of deep Machine Learning processing by asking LLMs the right kinds of questions.
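For a sense of what that replaces, here is roughly the kind of group-and-chart job that once justified a Hadoop cluster (and a specialist to run it), done in a few lines of pandas; the column names and numbers are invented for illustration, and this is not Grok's actual output:

```python
# A toy version of the classic "big data" workflow: aggregate, then chart.
import pandas as pd
import matplotlib.pyplot as plt

# Invented sample data standing in for what would once have lived in HDFS.
df = pd.DataFrame({
    "region":  ["East", "West", "East", "West", "East", "West"],
    "quarter": ["Q1",   "Q1",   "Q2",   "Q2",   "Q3",   "Q3"],
    "revenue": [120,    95,     140,    110,    160,    130],
})

# The aggregation step that used to require a MapReduce job at scale.
summary = df.pivot_table(index="quarter", columns="region",
                         values="revenue", aggfunc="sum")
summary.plot(kind="bar", title="Revenue by quarter and region")
plt.tight_layout()
plt.show()
```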
The resources below are not ranked in order of preference; they are simply listed in the order that I searched for them.
On Robotics
Jim Fan of Nvidia: the Physical Turing Test (Isaac Sim)
Jim makes the claim that in the LLM (“chatbot”) space, we’ve already crossed over the threshold of the so-called “Turing test”.
He makes the case that robotics is not far behind, and explains how robots are now being trained in a virtual space and then "one-shot" transferred to physical robots, which can then perform those learned movements.
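A heavily simplified sketch of that sim-to-real idea: train a policy across many randomized versions of the simulated physics ("domain randomization"), so that the real robot's dynamics are, statistically, just one more variation. Everything here (the toy 1-D dynamics, the single-gain policy, the random-search trainer) is invented for illustration; real pipelines use Isaac Sim and reinforcement learning:

```python
import random

# Toy 1-D "move to the target" task. The policy is a single gain k:
#   force = k * (target - pos) - damping * vel
# Domain randomization: each training episode samples a different mass,
# so the learned gain must work across a range of dynamics rather than
# being overfitted to one exact simulator.

def episode_cost(gain: float, mass: float, steps: int = 50, dt: float = 0.1) -> float:
    pos, vel, target = 0.0, 0.0, 1.0
    for _ in range(steps):
        force = gain * (target - pos) - 1.0 * vel   # P-control plus fixed damping
        vel += force / mass * dt                    # crude Euler integration
        pos += vel * dt
    return abs(target - pos)                        # final distance from target

def train(n_candidates: int = 200, episodes: int = 20) -> float:
    """Naive random search (a stand-in for RL) over randomized physics."""
    best_gain, best_cost = 1.0, float("inf")
    for _ in range(n_candidates):
        gain = random.uniform(0.1, 5.0)
        cost = sum(episode_cost(gain, random.uniform(0.5, 2.0))
                   for _ in range(episodes)) / episodes
        if cost < best_cost:
            best_gain, best_cost = gain, cost
    return best_gain

policy_gain = train()
# "Transfer": evaluate on one fixed, never-trained-on mass that stands in
# for the physical robot.
print(f"gain={policy_gain:.2f}  real-world error={episode_cost(policy_gain, mass=1.3):.3f}")
```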
Optimus showing new dexterity
All of what you see below was learned in SIMULATION, not through the traditional hand-coded if-then-else motor-control programming that Boston Dynamics used to rely on.
The new Veo 3 from Google Gemini:
The "simulation hypothesis" gains ground. If we can create "artificial" realities like this now, what will we be capable of in another 25 years? Or 50?
NotebookLM
This is required knowledge. Did you know AI can do this?
Genesis physics simulation
AI’s are beginning to understand math and physics from first principles, leading to unbelievably realistic simulations of motions, lighting, and surfaces and interactions. Movie effects will never be done the same way again.
Channels I regularly follow on the drive to/from work
Here is a collection of AI area channels that I routinely listen to.
Wes Roth
On the nature of AGI 2027 (Daniel Kokotajlo)
I do not subscribe to this dystopic/pessimistic viewpoint. But I generally agree that this kind of timeframe is plausible.
AI Explained on Claude 4.
This podcaster does a very thorough and insightful analysis of the latest models and releases.
Matthew Berman - Brings a viewpoint from a typical “software dude from Silicon Valley” but recently interviewed the CEO of Microsoft.
pDoom and politics
The Biden Admin was working on AI policy, in the form of National Security Memoranda, that would have doomed humanity had they been left in control. They understood what they were doing, but were possessed by the worst kind of ideological biases.
They are now awakening to the reality that Trump…will be the one in office when the world pivots—forever. Maybe Trump will be the last “conventional” US President.
Politics and government as we know it may cease to exist in 2028, as something entirely different takes its place. (To understand this, view the AI 2027 video(s) above.)
Geoffrey Hinton, the “godfather” of AI: he is an AI pessimist and suffers from TDS…but given his role in AI, one has to be familiar with (not necessarily agree with) his viewpoint, since the left gravitates toward him.
He was the professor who taught many of the current AI luminaries worldwide, including Ilya Sutskever (who was an early co-founder of OpenAI).
Project Stargate
Pay attention. The results of the rapid ramp-up in LLM capabilities seem to show that there is no "scaling limit," meaning that AI models keep getting more powerful the more compute and hardware they are trained and run on. Thus, "Stargate". Why did they choose this name?
The scaling results also explain why Musk is scaling up Colossus in Memphis.
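To see why "more compute keeps helping," here is the Chinchilla-style scaling law (Hoffmann et al., 2022) evaluated at growing scale. The constants are approximately the paper's published fit, and the parameter/token pairs follow its roughly 20-tokens-per-parameter compute-optimal rule of thumb; within the fitted range, predicted loss just keeps falling smoothly as scale grows:

```python
# Chinchilla-style scaling law: L(N, D) = E + A / N**alpha + B / D**beta
# Constants are approximately the fit from Hoffmann et al. (2022).
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for N parameters and D training tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# N and D scaled together (~20 tokens per parameter, the Chinchilla heuristic).
for n, d in [(1e9, 20e9), (10e9, 200e9), (100e9, 2e12), (1e12, 20e12)]:
    print(f"N={n:.0e} params, D={d:.0e} tokens -> predicted loss {loss(n, d):.3f}")
```

Note the E term: the fit does imply an irreducible floor, so "no scaling limit" is best read as "no wall has appeared yet at the scales actually tested."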
One of the things that has long been cited as the "turning point" for approaching the singularity is the ability of AI models to autonomously self-improve. This is still in its infancy; but given how quickly things have progressed since ChatGPT debuted in late 2022, it won't be long before we see sharp acceleration.
A recent quote:
Musk: “Humanoid robotics can grow the Global economy 10X”