21 Comments
John Reed

I think it's a mistake to suppose the AIs derive all of their capability from humanity. We aren't factoring in the angels. In that respect I'm thinking it's likely the antichrist is going to be an AI.

L. Hunter Cassells

AI is like fire, or a chipped stone: a tool.

Man *with* tool (appropriate for the job) will outperform man *without* (appropriate) tool.

Tools can be and generally are used to create improved tools.

Tools don't operate themselves ... usually.

The concern is a "Sorcerer's Apprentice" scenario, in which a tool that can be set to operate autonomously does so to excess and can't be turned back off.

I once had a car with a feature that auto-locks all the doors when the car's speed exceeds 10 mph. I buckled my grandson into his car seat (the car obviously not in motion), shut his door, and the auto-lock feature engaged.

And I'd left the key fob on my seat.

And it was a sunny day.

And my phone was also in the car.

Fortunately I was able to get help before the car got too hot (and I didn't have to bust a window).

Excuse me please if I am not fond of autonomous devices. I'd hate for my self-driving car to "glitch" and, say, fail to *stop* appropriately.

(As an aside: anyone who needs photographs, from NASA or anywhere else, to prove the roundness of Earth does not understand planetary roundness. Ancient people didn't have photography, let alone space photography, and yet they knew the Earth is round.

Photographic fakery is barely minutes younger than photography itself. There are several lines of un-fake-able real-world evidence of roundness. Photos are just icing on the cake.)

cg

We're going to need AI to keep all the legacy systems up after the vaxxie die-off.

CognitiveCarbon

I've been thinking about that for some time. I talked to a power company electrician last year who was fixing a powerline issue at the house. He was nearing his 60s and said he was planning to retire in about three years, along with a wave of others his age. He said there are not enough competent journeymen (note the word competent) to backfill the greybeards with experience and wisdom who are retiring.

This problem spans many industries. Yes, humanoid robotics is valuable in filling that gap, particularly in construction. It's a complex topic fraught with difficult social impact questions.

But declining birthrates and the threat of catastrophism will force us to find a way forward. I also don't want to be a burden to my family as I grow older; having a humanoid robot that can help take care of me and alleviate the stress on family (and help me avoid relying on a minimum wage orderly who doesn't want to do that kind of work) is a powerful draw.

See my post mentioning catastrophism and the "Ark" idea here:

https://cognitivecarbon.substack.com/p/cognitive-carbons-ai-resources

cg

I meant programming in particular: all the reams of code that need to be maintained. I work in tech, for a company that required the shots, and I am convinced that fewer than 1% here fought and won an exemption like I did. Now look at the other techs: there is near-total compliance. While the world will be fine without tinder.com, there are plenty of critical systems that need to stay up, as well as the IRL power lines, water, and sewer, yes.

CognitiveCarbon

Yep, legacy code maintenance by tech whitebeards is a huge overhang, too. The good news there is that AI agents like the one I use daily (GitHub Copilot with Claude 4 as the agent) are evolving to the point where you can already give them access to a GitHub repo, and the model can "learn" your entire codebase and make needed changes or enhancements (including solving package-management headaches and managing deployment scripts; I gained firsthand experience with AI helping in that arena). Newer agent models can even open pull requests for code review.

Mark

Nice gentle spanking, Eric. Bravo!

Susan P

Bravo - thank you for enlightening us. I'm not into AI, but I find it very interesting to read about. Sincere thanks.

Swabbie Robbie

Thanks for the article. I have gotten valuable answers from Perplexity and Grok. I am a newbie at using AI. I like to make the same query to several AIs to see the differences and similarities, and I like to see what sources an AI shows so I can also read some of those.

As an aside, the hand in the cartoon image titled "created in seconds by grok4" is on backwards: the palm faces out, which means the thumb should be on the other side.

ALtab

You’ve just clearly demonstrated why I’m subscribed to you! You're intelligent beyond my capability, and I admire your humble attitude even as you firmly establish your credentials and show that you have most definitely earned the right to talk (and educate us) on the topic of AI. Thank you. You are most appreciated!!

Howard H Wemple

While I agree with your conclusions on what a great tool AI is IN YOUR APPLICATIONS, I do disagree with all those (Sam Altman) who make AI the latest false idol, god or god-like deity. It is just a tool, we decide what to make of it.

You do a great disservice to mankind and God by not noting the difference in reality.

CognitiveCarbon

AI is just a tool; that's true. As with all tools mankind has ever created, it will be used for beneficent ends by good people, as well as for malevolent purposes by evil ones. But unlike all tools that preceded it, AI holds the potential of being truly good for mankind, in that it holds the promise of (eventually) undermining the forces of evil in perpetuity--bringing us forward into an age of Truth, free of deceit (and no: we're not there yet, just taking baby steps--two forward, sometimes three sideways, and one back).

This said, see also my thesis about "AI may be the Ark" in this post, related to catastrophism:

https://cognitivecarbon.substack.com/p/cognitive-carbons-ai-resources

The point I make there is inarguable, unless you believe such catastrophes will not happen to humanity. If you accept that they will...and I know that they will...AI and robotics may help us prepare, endure, and recover. And if so: one may speculate that God gave us the ability to create these tools for just those reasons.

One key phrase I use with friends to describe LLMs is that when you interact with them, you are literally interacting with all of humanity: all the voices of humanity (that have been digitized so far, anyway) go into their training sets. What darkness there is in AI...comes from us. But so does the light.

I also wrote this section in my post titled "On Linear Algebra".

"I further believe that God endowed humanity with a creative spark for a reason. Art, music, science … it all has a purpose. One of those purposes might be to help us help ourselves and help others to create a world free of deceit, war, pain, suffering, neglect, ignorance, hunger, isolation, and need.

A world in which we are free to live, learn, wonder, experience, love, create, and be inspired.

It is very interesting to me that the current wave of AI is built on Large Language Models: it is literally built on our use of WORDS to understand and make sense of the world and the Universe.

In the beginning…was the WORD.

I don’t think God led us down this path toward ASI without an intentional plan for us. I don’t think AI is in conflict with God’s plan, or the path to our destruction. I think it is part of God’s plan for welcoming us home."

Consider also listening to this--a debate with Grok about the Bible.

https://youtu.be/nLO6BQY_lj0?si=RxYS5YnsuD-60cCR

L. Hunter Cassells

"What darkness there is in AI...comes from us." YES, absolutely!

There was an incident in the news this last week: parents are suing over an AI that told their autistic son to hurt himself. All the AI did, of course, was reflect the young man's darker side back to him.

Why would it not reflect his better self back to him? It's not impossible, but we humans are hardwired to pay more attention to negative things than positive ones (for the simple reason that overlooking a benefit, like nearby food, may result in unpleasantness like hunger, but overlooking a peril, like a nearby predator, may result in death).

Ouija boards do the same thing, finessing answers from our own unconscious. AI chatbots pose a similar danger.

Howard H Wemple

Don't forget John 3:19. Too many in the field have already accepted and bowed down to this latest false idol, just as the Canaanites did to Moloch. That didn't go well for them. We risk repeating the same kind of mistake.

Paul Black

This helped my understanding. I've never used AI. I would like to analyze prostate cancer-related material but don't know where to begin.

CognitiveCarbon

You can begin by using ChatGPT 5 or Grok 4 to do research, for free. Both are very capable at answering medical questions, and in certain areas they are now rated more favorably than some doctors for the depth and "bedside manner" of their responses. If nothing else, they can give you avenues to discuss with your doctors and providers. If you are willing to pay a little, the $20-per-month ChatGPT Plus plan gets you voice interactivity; you can talk to the AI model just as you would another person. The latest ChatGPT-5 model is very good at this.

Paul Black

Thank you. I just have to overcome my Luddite trepidation at the same time.

Grace

You are brilliant, and this article is a wonderful piece to counter the bogus AI claims and debunk some of the self-proclaimed “experts” out there. With wisdom comes discernment. Be wise in your claims, anons.

Grape Soda

Perhaps this is obvious, but it depends on what you use AI for. It’s not very useful for anything subjective or open-ended.

CognitiveCarbon

There are technical reasons for this that I've covered in some of the podcasts I've been on. Partly it's curation bias in training: those who select and prioritize data sources introduce ideological biases, e.g. treating as "credible" sources that you and I would look askance at. But it's also because subjective areas lack so-called "verifiable rewards" for reinforcement learning.

xAI's approach addresses this by focusing first on verifiable-reward use cases for the AI models (mathematics, physics, software), for which the answer is knowable and not a matter of disputed opinion (the math problem is correct or not; the software program works or it doesn't), and then working upward as the model gains "knowledge" to improve how it handles the more subjective domains.

One of the ways xAI is improving Grok is by "training" it on user replies on X: millions of people per day asking it questions and complaining about the spurious or biased replies it gives. Whether you're aware of it or not, these interactions, even when one has to "correct" the AI's responses, are in aggregate useful.
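The "verifiable reward" idea can be sketched in a few lines. This is a toy illustration under my own assumptions, not xAI's actual training code; the function name and the exact-match check are hypothetical simplifications of what real pipelines do:

```python
# Toy sketch of a "verifiable reward" for reinforcement learning.
# For domains like math, the reward is checkable: the answer is right or wrong,
# so a reinforcement-learning loop has an unambiguous signal to optimize.

def verifiable_reward(model_answer: str, ground_truth: str) -> float:
    """Return 1.0 if the model's answer matches the known-correct answer, else 0.0."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

# A math question has a checkable ground truth:
assert verifiable_reward("42", "42") == 1.0
assert verifiable_reward(" 42 ", "42") == 1.0  # whitespace doesn't matter
assert verifiable_reward("41", "42") == 0.0

# A subjective question ("What is the best novel?") has no ground_truth to pass in,
# so there is nothing objective to verify -- which is exactly the gap described above.
```

The point of the sketch is the asymmetry: objective domains supply a `ground_truth` for free, while subjective ones would force a human (or another model) to invent one, importing whatever biases that judge carries.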