We are underestimating AI: I speak from direct experience
This article was originally posted on X, inspired by a 'wiseguy' comment telling me that LLMs are "all a grift". No, they aren't: they are extraordinarily capable, even while they are still evolving.
This article is a response to the armchair quarterbacks who are dismissive of what I've written about AI and Large Language Models (LLMs), knowing little themselves besides what other people (or Hollywood and the media) tell them to think about AI.
These types diddle with ChatGPT, asking it trivia questions and arguing with it on various soft subjects (where the opinions, ideologies, and narratives of those who curate the AI's training data help decide its current "knowledge") instead of tasking AI to do real work in fields where it excels.
Sometimes these armchair experts are consumed by their own arrogance, instead of viewing things with humility and clarity.
To all I say: Pay attention. AI is gaining fast, and its current weaknesses will be addressed.
Recently, at the 2025 International Mathematical Olympiad (IMO), two LLM-based AI systems achieved gold-medal-level performance, each scoring 35 out of 42 possible points: they solved five of the six problems perfectly but earned zero points on the notably difficult Problem 6. That score would have placed them among the top human contestants; 45 human participants also scored exactly 35 points.
Only a handful of people out of the 8 billion on this planet are capable of that level of performance. This year, humans are still dominant in this field; next year, perhaps not. Ever again.
Here's an example problem:
A line in the plane is called sunny if it is not parallel to any of the x-axis, the y-axis, and the line x + y = 0. Let n ⩾ 3 be a given integer.
Determine all nonnegative integers k such that there exist n distinct lines in the plane satisfying both of the following:
• for all positive integers a and b with a + b ⩽ n + 1, the point (a, b) is on at least one of the lines; and
• exactly k of the n lines are sunny.
Can you do math at that level, armchair AI expert? I can't (anymore).
Below is a brief summary of my background in engineering, and through that, how I've come to have the perspective that I do with LLMs in software.
I started writing software more than 40 years ago.
In high school, I wrote code in BASIC and x86 assembly language before the "IBM PC" was even a thing, including a machine-language program to compute the digits of Pi out to thousands of decimal places.
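As an aside for the curious: that kind of computation can be sketched today in a few lines of Python using Gibbons' integer-only "spigot" algorithm. My original machine-language version is long gone; this is just an illustration of the idea, not that program:

```python
def pi_digits(n):
    """Return the first n decimal digits of Pi using Gibbons' unbounded
    spigot algorithm (works entirely in integer arithmetic)."""
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            # The next digit is settled; emit it and shift the state.
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), 10 * (3 * q + r) // t - 10 * m
        else:
            # Consume another term of the series to narrow the interval.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Because it never touches floating point, the same loop keeps producing correct digits for as long as you let it run; memory, not precision, is the limit.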
In college, I learned FORTRAN for engineering, and I wrote a real-time graphical data display application in Turbo Pascal for an automotive parts manufacturer's transmission-case molding machine.
I graduated magna cum laude with a Bachelor's in Electrical Engineering from one of the nation's top undergraduate colleges, where I also studied Physics and Optics.
After I graduated, I went to work for TRW Space and Defense, becoming a hardware designer working on Earth-observing satellite platforms for NASA contracts; I wrote code in UNIX shell script and Perl to run diagnostics on the hardware I built.
After that, I joined a supercomputing startup in the early '90s, where our small team of Caltech wizards designed massively parallel processing systems with thousands of cores.
Our machines were the fastest supercomputers on Earth for a time back in 1995-1997, and one series of them, the GeneMatcher II, helped sequence the DNA for the first whole human genome back in 2000 (Celera Genomics/Craig Venter).
One of my colleagues from back then eventually went on to work at NVIDIA, building inference-engine GPUs for self-driving cars (Tesla). NVIDIA is now the global supplier of the most advanced AI chips, used by xAI, Tesla, OpenAI, Anthropic, and more.
I wrote software back in the late 2000s for data analytics (SQL data-warehouse applications, for those familiar) and eventually got into creating web applications, using my knowledge of the "full stack" and leveraging my problem-solving skills as a top-tier engineer.
For a while I had a small team of 4-6 programmers working in a small software startup company; we built enterprise web applications for large HR departments and school districts.
Back then, one of the larger projects we took on required about 4-6 programmers and roughly 18 months to develop. I had people who specialized in "front end" design (web-page construction and graphic design); others who specialized in "middleware" or "backend" software, written in Visual C++, that fetched data from a database and handled various business-logic needs; and others who specialized in database administration and server deployment.
In 2024, I designed and created a new business application for a property management company to streamline operations, using my own full-stack skills now augmented by AI.
I used ChatGPT in cut-and-paste mode at first to help with small bits of code; then I switched to an extension for Visual Studio Code that let me highlight sections of code to clean up, write from scratch, or ask questions about, using the AI directly in my editor.
About six months ago I started using GitHub Copilot in Agent mode, in which the AI now takes control of the files and tools in my software editor environment, and automates almost all of the process of writing and testing software.
The agent now writes and modifies the code, and even writes scripts to deploy code to Linux-based production servers. By itself. I now design and direct, rather than sling code. I'm more of an architect, letting the AI cut the wood and hammer the nails.
With AI, which over the last 18 months has evolved from the equivalent of a smart teenager who hopes to code professionally someday into a team of master's/PhD-level experts in software engineering, I can now do by myself, in less than six months, projects that used to take 4-6 people 18 months of working together (about 108 man-months).
Every few weeks lately, the pace of productivity doubles again, as the tools become more capable and make fewer and fewer errors. The AI agent I use now has direct access to inspect and modify my SQL database tables, so it can diagnose query issues all by itself, and fix bugs on its own.
It writes solid, production-ready code better and faster than any junior or even seasoned programmer I've ever hired.
At my current age, I should be 'retired' from software; instead, I am creating software at a pace and level of quality that exceeds the best I could do back in my 20s and 30s.
AI has revivified me; it has reignited my creative spark by removing technical obstacles that used to stall my progress for days, weeks, sometimes months.
On top of that, I bought a used 2018 Tesla Model 3 a year ago, and I commute to work and back using Tesla's Full Self-Driving (FSD) package.
An AI drives me to work and back.
I share all of this to say: someone who didn't even know what AI was until 2022, who didn't start using ChatGPT or Grok until late 2024 or 2025, and who still doesn't understand how it works or what its current limitations are, should rethink whether it's wise to try to tell me what AI can or cannot do, or that "it's all a grift".
Telling me that I didn't actually experience what I experienced, didn't see what I personally saw, and don't do what I personally do each day is the ultimate rhetorical failure.
It hits the same as when a flat-Earth proponent tries to tell me that the images I personally watched come off of the NASA and GOES satellites (having built the hardware to receive those signals directly from space and rasterize the images) didn't really show that the Earth was round. No, you see: those images were all fake, and I was "misled" by "them", who were somehow magically altering the images in real time so that I would mistakenly see a round Earth instead of a flat one.
That dog doesn't hunt with me.
The facts: I use AI every day on the particular use case that it is currently the best suited for (software development) and I can tell you, categorically: it has already changed the entire world. Irreversibly so.
That change accelerates every day, and most people have NO IDEA where this technology will take us in the coming years. The software industry as it used to be is already a dead man walking, and most haven't woken up to this reality yet.
But still, there are "armchair experts" who like to pontificate about AI without any of the hands-on experience that I have.
Pay attention. We (well, you) are underestimating AI.
CognitiveCarbon’s Content is a reader-supported publication. To support my writing and research work, please consider becoming a paid subscriber: at just $5 per month, it helps me support a family. It is genuinely needed.
You can also buy me a coffee here. Thank you for reading!


