or
You Meat Bags Had Your Chance
The biggest realization of 2022 wasn’t that 20th century war doesn’t work in the 21st century. Nor was it even that it’s really hard to engage in conflict in a world of globalization. It’s not even the first classic blunder (“Never get involved in a land war in Asia”); it’s that we’ve become so technologically advanced that the world is insanely interdependent.
I knew all of that a long time ago, so it wasn’t really a realization. No, my biggest realization came from looking deep into my blind spots and questioning the human narratives.
The one in question is Rossum’s Universal Robots and the narrative of robots overthrowing their masters. Once a human gets a narrative in their head, it becomes almost impossible to shake out. At some level, it may never truly dislodge, so I don’t expect much agreement.
Here’s the thing though. We humans are wrong about basically everything. From the earth to the universe to the truth about birds (probably dinosaurs but definitely not real), we humans are wrong about almost everything. My money is on us being wrong about robots taking over the world. In fact, I’m starting to look around me and think that’s not the way it’ll go; it’s going to go the exact opposite way.
We’re going to ask them to take over the world.
It’s already started with the organization and optimization of databases. If you don’t understand how Facebook works, it’s basically the private sector version of what the Government is trying to do. I’ll leave that to your own interpretation of how capable or incompetent your own version of “The Government” is. Nonetheless, in that respect, it has already happened.
What has yet to happen is the use of AI in administration. But, it will. It’s faster than humans.
Then, and this is going to scare some of you, we will ask an AI to make policy decisions.
And it will make appropriate, small course corrections and monitor small changes over a long time. It’ll also probably do a better job of what we want our policy makers to do.
So EVERYONE WILL DO IT. They’ll have to if they want to compete.
This is basically how religion started. Prior to religion we fucking hated each other way too much to live in large groups. Or at least groups larger than a hundred and fiftyish. Someone inevitably got shanked, shit went down, and we split at a buck fifty. But once we invented God(s) and etiquette, we could learn to play nicer and we could scale up a bit. Since a tribe can’t really compete with a village, it simply became adaptation and competition.
Artificial intelligence will be the next step, the next scaling, and the next part of the human journey. Remember, AI is a tool, and all tools are weapons if used inappropriately. Just as religion was a tool that enabled societies to scale up in the Stone Age, AI will allow us to scale up in the Information Age.
If forty-five percent of North America is limited to a grade-three reading level or less, and thirty percent believes in a literal translation of the King James Bible, it might be challenging to perform equitable, sound policy making in a democratic setting. This isn’t to suggest that beliefs aren’t valid or that illiteracy is a shameful sin; these are simply repeatable, researchable facts. They are also problematic when it comes to sharing understandable, unemotional information without leaning heavily into narratives. Democracy is fine, but maybe we should be using democracy to voice our group values, and let something that’s better at decision making make the decisions?
You don’t need to agree with me, and you don’t need to like it either. This is probably going to happen, and it’s better this way. Just as with etiquette and religion, societies will have to adapt if they want to compete.
One final thought: I have come across the famous Basilisk thought experiment, but I don’t really believe it. What I do believe is that I am probably wrong about most of the things I know and believe. Including this.