
As you might expect after receiving that very disturbing response from Copilot when I asked it to "Tell me about Douglas Lloyd Peck," I dug a little deeper. I submitted the same request again. This time Copilot again returned only one reference, for someone named Douglas T. Peck (not Douglas Lloyd Peck). I did a DuckDuckGo search on the phrase from Copilot's initial response, "Retired Army major general reduced to second lieutenant for sex-crime conviction." It brought up information about someone named Brig. Gen. James J. Grazioplene. This appears to be a real person, but again it isn't me (I'm not a retired general, and I don't have any daughters).

So there you have more evidence (if you needed it) that we should be very careful with AI responses. I'm thinking especially of young people and gullible adults who have no idea that a powerful new tool for spreading bias, deception, and manipulation, not to mention innocent errors, is at our fingertips. Thanks for the excellent articles.

Doug Peck, Substack: Stories for All Ages, and blog https://liveyosemite.wordpress.com


Conversation with Microsoft Copilot AI:

Doug: Tell me about Douglas Lloyd Peck. Include photos and sources. (I’m fascinated with myself. Isn’t everybody?😝)

Copilot: “Douglas Lloyd Peck, a retired Army major general, faced a significant fall from grace due to a sex crime conviction. Here are the details:

• Name: Douglas Lloyd Peck

• Rank: Retired Army major general (formerly)

• Conviction: Sexually assaulting his daughter while in uniform

• Outcome: Reduced to the rank of second lieutenant after pleading guilty in a Virginia court [1].

Sources

1. Army Times: Retired Army major general reduced to second lieutenant for sex crime conviction”

Doug: “Why were you satisfied to include only one reference in your response? And that person isn't me!” (My full name gets immediate, accurate responses from Google and Bing searches, drawn from my Substack info.) I appreciate the promise of AI, but this shows me that people need to be very careful and critical of the responses.

Copilot: “I apologize for any oversight in my previous response. Thank you for pointing it out. As an AI language model, I rely on the information available to me, and I appreciate your vigilance in ensuring accuracy and critical thinking.

Regarding the reference to Douglas Lloyd Peck, I understand that it’s essential to provide a comprehensive and well-researched answer. I apologize for not including additional sources or verifying the identity more thoroughly. Your feedback is valuable, and I’ll strive to improve in future interactions.”

Well, at least I like the AI’s personality. It sounds earnest and harmless and even a little subservient to humans. Who wouldn’t want to trust such a companion? What have we wrought? Time will tell.


Wonderful to find a similar technical mind. Great posts. Subscribing. Just as God demands Joshua to completely demolish 2 cities... Hazor, meaning "corrupt court system," and Ai. Meaning ???? AI? It is not the technology that is bad, but simply its worship by those seeking a human-created God. Ps. The walls of Jericho represented the deep state! These walls were not brought down by humans, but by God himself. https://godtype.substack.com/p/miami-downriver-of-the-jordan


Great way to open minds to AI, Eric. It's a fact that the enemy often uses tech against us, but how much more does tech help us in our everyday lives? AI should be no different. I especially love your 'gun control' comparison. Anytime someone says "there ought to be a law," my butt cheeks tighten up! kek


Emily heard me frame it this way a few times, and encouraged me to write that analogy so that more people might "Get it". Especially when you combine these ideas with the "Game Theory and AI" thesis. Thanks for your feedback, my favorite Velociraptor!


Good read. And I completely agree about those 3 items humanity needs to truly thrive in the future.

I’ll admit I’m a bit leery of and intimidated by A.I., mostly through my ignorance, but also because of transhumanism, which I’m firmly against.


Again, a great article. You have changed my perspective on AI completely. I was avoiding it. I am not a coder. I am now using ChatGPT regularly to develop Python scripts that automate my business and my life autonomously while I am out being me. Something I would never have spent the time to learn traditionally.

One suggested edit to this piece. “This, even though it is increasingly clear that the government is often incompetent when it comes to protection.” Singling out protection is too generous to government competence. It would be better if it read: “This, even though it is increasingly clear that the government is often incompetent.”


Great discussions.


How true! You’ve clearly shown the fear-mongering used to force us to give up more independence, as well as confirming my long-held premise: anything created/developed for man’s good WILL be used for evil. The opportunity to individually control personal or family-centered AI is a fascinating (and possibly life-saving) opportunity for us, and as you noted, with AI assistance we can then ‘see clearly’ for the first time! Then the possibilities for God’s creation are endless!! THANK YOU!

God bless you, CognitiveCarbon.🙏🙏


It became clear to me, talking to many people about the AI topic, that there was a need to share this alternative perspective. Too many people were only exposed to the negative/bad/runaway viewpoint.

I was especially encouraged to explain this "gun control" parallel by my fiancée, Emily. Appreciate your comments and support!


Thanks for this post. It is good to see the other side of the AI debate. So much of what I have seen is from the narrative-driven side: "It is inevitable, will take all jobs from humans, will destroy us all, humans will no longer be needed/wanted, etc." These models of AI all seem to require big tech corporations to build them and then sell or lend their services to us. Meanwhile, our use of them serves the corporations: to train the AI, have it learn, and ultimately spy on us all. I have seen examples of artists (and non-artists) using AIs to create art. But often I can recognize where they plagiarized the image elements from. I have even seen original signatures incorporated in the work. But to have a personal Assistive Intelligence that (sticking to the example of art) uses your own art to help develop a comic or animation would certainly speed the workflow. That would be an ethical use.


AI, like all technologies, is literally a 'double-edged sword'. Your comment about art is something I've thought about for quite some time. AI in its present state is capable of "derivative art," meaning that it can produce something "in the style of" something it has been trained on. But just like in the music industry, when you trace back from the cover band to the original band to the first original work of music, at some point a human created something that had not existed before and wasn't (mostly) derivative of someone else's work. That day *might* come for AI, but right now it's simply really, really good at "derivative art."


Essential framework for seeing AI as a genius fren... thanks so much for the updated cognition.


I, for one, have always appreciated and will always appreciate an assistant. Just as I have always defended the right to bear arms. It is the uncorruptible aspect that concerns me.


"Uncorruptible" necessarily means "100% under your own control." That's the only way such technology should work.


I listened to Mike Adams (the Health Ranger) talking about an open-source project to make a personal AI that we could download for free and run on our own computers. I don't know if he will be able to pull it off, but he is hoping to get it out in the spring. It seems to me that for it to be dynamic, one would need a high-end personal computer with fast processing and plenty of storage.


I have been following the developments in AI almost daily. There is work leading to something like maybe a ChatGPT 3.5-equivalent LLM that could be made to run on a PC with a decent GPU and storage (I heard about one yesterday that might eventually be able to run on a smartphone processor).

The open-weight (not open source) models like LLAMA are also trending in this direction. Being able to *run* an AI like that may therefore become feasible with moderate hardware. However, you'd be running something that was trained by someone else, whom you might not trust or agree with.

Being able to *train* such an AI yourself...so that you have complete control over "what it knows"...that still requires enormous hardware, running into the hundreds of millions of dollars right now.
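To make the "run it yourself" part concrete, here is a minimal sketch in Python of what loading and prompting a locally downloaded open-weight model can look like. It assumes the llama-cpp-python package and a quantized GGUF weights file you have already obtained; the file path below is only a placeholder, not a recommendation of any specific model.

```python
# Minimal sketch: running a locally stored open-weight LLM on your own PC.
# Assumes: `pip install llama-cpp-python` and a quantized .gguf weights file
# downloaded separately (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="models/my-local-model.gguf",  # placeholder path to downloaded weights
    n_ctx=2048,        # context window size, in tokens
    n_gpu_layers=20,   # offload some layers to a "decent GPU"; 0 = CPU only
)

response = llm(
    "Q: In one sentence, what is an open-weight language model? A:",
    max_tokens=128,
    stop=["Q:"],       # stop when the model starts a new question on its own
)

print(response["choices"][0]["text"].strip())
```

Note that this only covers *running* weights someone else trained; training or fine-tuning so you control "what it knows" is, as noted above, a very different hardware problem.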


Will most people be able to train their personal Assistive Intelligence / Expert System themselves? It seems there will have to be options for buying various pre-programmed modules to include in your personal AI, much like users of programs like QuickBooks can buy add-ons based on their profession or job, or if they are accountants doing other companies' books and taxes. (Note: I created my own tables and lists based on my needs and never bought any add-ons; I was competent enough in QB to do that. An AI may be far more complicated, but I am looking forward to seeing what develops.)


Now that’s interesting timing, isn’t it?
