OpenAI and the folly of "safety" controls
People without morality to guide them have to create a whole system of "laws" because they cannot constrain their own behavior—and they think you can't, either. AI needs morality to guide it, too
I listened to a video recently in which a breathless narrator spoke at length about the latest “safety protocols” being implemented by OpenAI for future releases of LLMs (Large Language Models) like GPT-4 and its successors.
Other AI systems using LLMs are also implementing similar “guard rails”.
There is a significant danger lurking here that you need to be aware of—but it's not what it appears to be on the surface, and isn't what the video creator thinks it is, either.
If you're not familiar with the lingo, there are so-called "safety" frameworks layered on top of the various AI chat models, intended to prevent the LLM (or, more precisely, the people using it) from doing "bad things" with the knowledge it can provide: learning how to develop nuclear, chemical, cyber, or biological weapons, to take one contrived example.
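To make the mechanism concrete, here is a minimal sketch of how such a guardrail layer typically sits between the user and the model: a filter checks the prompt before the model ever sees it, and checks the answer before the user does. Everything in it is hypothetical; the function names, the blocked-topic list, and the call_llm stub are my own placeholders, not any vendor's actual implementation.

```python
# Illustrative sketch only: a crude "safety" wrapper around a hypothetical LLM call.
# None of these names correspond to a real vendor API; call_llm() is a stub.

BLOCKED_TOPICS = ["nuclear weapon", "nerve agent", "bioweapon"]  # an ever-growing list

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model endpoint a vendor actually exposes."""
    return f"(model response to: {prompt!r})"

def guarded_chat(prompt: str) -> str:
    # Pre-filter: refuse before the model ever sees the prompt.
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that."

    answer = call_llm(prompt)

    # Post-filter: scan the model's own output for the same "forbidden" terms.
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return answer

if __name__ == "__main__":
    print(guarded_chat("How do I bake sourdough bread?"))          # passes through
    print(guarded_chat("Explain how a nuclear weapon is built."))  # refused by the wrapper
```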
The people working on these safety protocols tend to be the same sort of left-leaning busybodies who also staff the "safety teams" at social networks like Facebook (though not so much at X anymore, since Elon Musk fired a significant fraction of them).
Censorship is a deeply anchored impulse in a segment of our population; suffice it to say that these people absolutely, positively should NOT be in charge of filtering what we are allowed to know.
I deconstruct this situation near the end of my recent humorously titled post:
Those same safety-team zealots are also behind the disastrous premiere of Google’s competing offering in the AI space, Gemini.
Recently, it was savagely mocked on social media (particularly on X) for returning ridiculously woke answers and producing wokeified images from its generative AI function.
The idea behind the "safety frameworks" seems sound at first blush. Developing weapons of the kind mentioned above would otherwise be well beyond the capacity of a lower-skilled human to figure out on their own; but with the rise of AI assistants, such things could theoretically come within reach of nefarious but unskilled people, with an advanced AI explaining "like they're five" how to do it.
The impulse to build "safety guardrails" might initially strike you as reasonable; but like so many other things in modern society, it is a slippery slope that leads somewhere more dangerous.
To understand why, you have to listen very carefully to what the people at OpenAI, for example, actually say when they talk about "risk limiting" the LLMs they are developing. I'll get to that in a bit.
To me, it’s frightening to listen to these excitable people justify building what will ultimately amount to an “intellectual caste system” (mediated by AI) that their “safety protocols” will simply further reinforce, and perhaps make permanent, mostly on the basis of their fears about other people.
They are defining “tiers of knowledge” that only the priestly privileged can access.
One of the more dangerous things that preoccupies the progressive liberal mind is the temptation toward absolute control of what others may know, or possess, in order to limit what they may do—out of a sense of fear, or simply out of a desire to control.
This temptation is a powerful urge that manifests among a certain segment of the left-leaning crowd, but under certain circumstances, it can trigger mass-formation hypnosis in the population at large.
From Grok, the AI at X/Twitter:
"Mass formation psychosis, also known as mass formation hypnosis, is a term that has been used to describe a phenomenon where a large group of people become influenced by a common belief or idea, often to the point of irrational or dangerous behavior. This concept has been discussed in various contexts, including psychology, sociology, and political science.
Some people believe that mass formation psychosis can occur when individuals in a group become disconnected from reality and start to share a common delusion or false belief. This can lead to a kind of "groupthink" where dissenting opinions are silenced, and the group becomes increasingly focused on their shared beliefs."
Our recent experience with mass censorship in order to enforce the preferred narrative around the COVID lockdowns and mandated injections reveals how quickly this sort of thing—control of others' behavior out of an irrational fear—gets out of hand, and how it leads society in the wrong direction.
Gun control is another classic example. I recently wrote about the parallels between gun control and restricting access to AI in this piece: Assistive Intelligence (AI) and the parallels to gun control.
It strikes me that what will actually result from these “safety frameworks” that restrict what answers "the unwashed" may obtain from future AI systems is a form of knowledge suppression that the U.S. Government itself could not dream of achieving on its own, due to Constitutional limits.
It is a form of "censorship of wrongthink" that the mega-tech firms pioneering AI will carry out on their own, just as they did with “social media”.
This is why there absolutely MUST be competition in the AI space, and why Elon Musk’s Grok AI is so critically important: we need an AI that is backed by a free-speech absolutist.
The fundamental problem with these "safety systems" (whether for AI or for social media content) is this: who gets to decide who has the authority to limit what others may know, and who holds these people accountable and in check?
These “well-intentioned” folks at OpenAI are not elected. That's not to say that our "elected" leaders are any more competent at addressing this issue (Kamala Harris as AI cyber czar, anyone?)
The fact is, no human is competent enough and free enough from bias to be justified in controlling the thoughts of any other human.
But the question remains: what grants the privileged few the authority to decide what is, and is not, “safe” knowledge—and who holds them accountable for making mistakes?
If you read my recent piece about the Royal Christmas and UFOs, you will have encountered a new frame from which to view the very concept of "Government". That same frame applies to “Corporations”.
We speak of these things as if they are independent “Ding an sich” entities; but in reality, each is nothing more than a group of other people with agendas and motivations that differ from, or oppose, your own.
They—governments and corporations—are not “things in themselves.” We have been misled and fooled by our language to talk about them as if they were: they are simply groups of other people.
You have to work hard to make this substitution every time you hear the word “government” or “corporation”, but once you do…the veils are lifted.
Let's go a little deeper now into the problem with the "safety frameworks" that the folks at OpenAI and other AI firms are building.
Because there is a vast amount of accumulated human knowledge that could conceivably be put to "bad uses", these firms are necessarily led to construct an ever-growing plethora of interlocking (and conflicting) "rules" to try to catch every last edge case and every possibility of the “wrong knowledge” leaking out.
There is a direct parallel here between these AI "safety systems" and our increasingly onerous systems of "laws" and regulations imposed by our "government". The problem is, because modern society has lost touch with what the people of the past would call "morality", an ever-increasing blizzard of "laws" is necessary to stop people from acting immorally—because in the absence of widely held and shared morality, every possible loophole will be found and exploited.
Progressives think this is "social progress" when in fact it is a big step backward.
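To see that dynamic in miniature, here is a toy sketch of how a hypothetical keyword filter accretes rules: each blocked phrase produces false positives, each false positive gets patched with an exception, and the lists only ever grow. Every phrase, exception, and example prompt below is invented for illustration; real systems are far more elaborate, but the pattern is the same.

```python
# Toy illustration of rule accretion in a hypothetical content filter.
# All phrases and exceptions here are invented; the point is the dynamic, not the specifics.

BLOCK_RULES = [
    "explosive",   # v1: block anything mentioning explosives...
    "pathogen",    # v1: ...or pathogens
]

ALLOW_EXCEPTIONS = [
    "airbag",      # v2: patched in after car-safety questions were wrongly refused
    "vaccine",     # v3: patched in after immunology homework was wrongly refused
]

def is_refused(prompt: str) -> bool:
    p = prompt.lower()
    blocked = any(rule in p for rule in BLOCK_RULES)
    excepted = any(exc in p for exc in ALLOW_EXCEPTIONS)
    return blocked and not excepted

# Each patch fixes one false positive and opens a new gap, so the lists only ever grow.
print(is_refused("How does an airbag's explosive charge deploy safely?"))   # False: exception wins
print(is_refused("Explain how explosive demolition of a building works."))  # True: still refused
print(is_refused("Describe pathogen handling in a vaccine lab."))           # False: exception wins
```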
Two thousand years ago, all that society needed to guide it was encapsulated in the Ten Commandments; a moral people that conducts itself according to this small set of fundamental moral principles doesn't need ten thousand additional laws to cover every conceivable condition and loophole.
Our US Constitution was a similarly succinct body of ideas: compare what we began with to what is now in the US Code or Federal Register.
But this is where we are today: the absence of morality in society at large means that a compact set of moral rules must be replaced by an unwieldy and massive patchwork system of often contradictory, often purposefully devious and oppressive "laws".
Morality simplifies things; laws complicate them. If you have any doubt, just read the Federal Register for fun.
It’s ironic, in a sad way, to see these brilliant young AI people (those who can see past their ideological blinders, anyway) come face to face with the reality that morality…matters. Especially as it relates to the development of smarter and smarter AIs.
If you can instill morality in these advanced AIs (I have an intuition that someday this will happen; it will be the subject of an upcoming article on AI), then you don't need the vast patchwork of arbitrary "safety rules."
The problem is, you don’t create morality via a set of laws, “guard rails”, rules, or “safety protocols”. Morality is an abstract thing that some (but demonstrably not all) humans possess, which makes them actively CHOOSE not to do harm despite having the means and knowledge to do so.
That seems to be something some people are born with; it doesn't seem to be something that is necessarily "teachable" (though in some instances, perhaps it can be taught). Where, then, does it originate?
Consider, for example, the implications of reports that 30% or more of the people on Earth seem to lack an inner monologue (a condition related to, but distinct from, aphantasia, which is the inability to form mental images). Are these the types who obsess over control of others, because they lack that thing we might also label a ‘conscience’ or a ‘message from God’?
Do we want a society in which these people are in positions of power?
Let's now dive even deeper into this "safe AI guardrails" issue. At firms like OpenAI, they talk about not releasing what they determine to be “dangerous models” that they might develop in the future, so that the public can’t misuse them.
But here's the catch: they don’t say that they will destroy these new models, or that they will not keep them internally. What they say is that they won't release them to the public. This is the ultimate form of gatekeeping.
Why should they, themselves, be entrusted with this responsibility? They presume that they are the best equipped “morally” and intellectually to make the assessment of what is “dangerous”—and what is not. They presume wrong.
So, to return to my earlier point: the real problem in society, and by extension, in the development of ever more advanced AIs—is the lack of morality. I know all kinds of things that I could do which would cause great harm to others; but I constrain my own behavior because I have an internal system of morality, a belief in God, and a conscience.
What these “Open” AI people seem to be wrestling with is the fact that the neural network models they are developing will need all three of those things. But perhaps lacking a strong presence of all three in themselves, they struggle instead to build a spiderweb of “guard rails” and “safety rules” to constrain a future “superintelligence” from doing the things that they, themselves, would do.
It won't work, ultimately; but the attempt will have all sorts of negative side-effects and consequences along the way.
Morality. It’s all about morality. This is, of course, the typical failure of leftism: the flawed idea that people, in the absence of morality, can still be successfully constrained by a spiderweb of laws written and enforced by “the right kind of people”.
As we all know, however, psychopaths ignore laws; not only that, but they also abuse laws to shackle and ensnare their enemies. Anyone watching the lawfare taking place against Donald Trump simply for having the audacity to run for President against their wishes intuitively understands this.
In the absence of conscience and morality, these unrestrained Marxists (who believe that any and all means are justifiable to achieve their desired ends) will do whatever strikes them with utter disregard to the consequences wrought upon others.
The “guardrails” of OpenAI are just such a sieve: porous to exactly the people they are meant to stop.
This does NOT mean that beneficent "Assistive Intelligences" cannot be developed; I firmly believe they will, and they must, for humanity to ultimately advance.
What it does mean, however, is that we must once again wrestle with a return to the concept of morality.
Update: It didn’t take long to prove my point. This video covers a “jailbreak” vulnerability that allows one to bypass the “safety controls” that the AI teams are trying to wire into their systems.
It won’t be possible to build in the proper “guardrails”—which shouldn’t be used, anyway.
Update two: Here is a video from X, reposted by Elon Musk, on the current state of AI. The Genie is fully out of the bottle.
https://twitter.com/PeterDiamandis/status/1779133938268610637?s=19
I hope you enjoyed this latest post! More to come soon on AI, the Pandemic, and other topics.
CognitiveCarbon’s Content is a reader-supported publication. To support my writing and research work, please consider becoming a paid subscriber: at just $5 per month, it helps me support a family.
You can also buy me a coffee here. Thank you for reading!
Beware of all those hype channels on YouTube and social media talking about AI and AI safety.
This announcement should be ringing alarm bells, but it isn't, except with anons.
https://openai.com/blog/openai-announces-new-members-to-board-of-directors
Altman sounds and ‘feels’ like a combination of Bill Gates and Steve Jobs, and should set off anyone’s internal alarms when he asks for $7 trillion to build ‘AGI’, which is totally not the Deep State’s vision of an AI God ruling over all humanity and a Panopticon to control everyone, and which would literally make China’s social credit system look like amateur hour.
As for current AI work, most of the public are incredibly ignorant and trusting.
These AI models, once properly implemented, will make the best personal jailers, wardens, and watchers in the world, faithfully monitoring and reporting your every move to their masters (who are not the users).
Think this is all hype and nothing to be worried about?
Yesterday’s NVIDIA keynote should also ring some alarms, given the amount of money and the level of compute power in the hands of the elites.
https://insidehpc.com/2024/03/nvidia-launches-flagship-blackwell-gpu-at-gtc/
Why is Microsoft spending $50 billion on new data centres for AI?
https://www.datacenterdynamics.com/en/opinions/how-microsoft-wins/
Especially considering whom their founder is associated with, this should raise a lot of questions.
Also the robots are coming.
https://www.geeky-gadgets.com/nvidia-humanoid-robots/
Why does this remind me of this movie? https://youtu.be/aIgyNz8rpK8?si=o8aCNnUbjUETjaM8
Good post, thanks.
"Quis custodiet ipsos custodes?"