Sunday, 2 April 2023

Why do we need to be cautious about AI? It's better than humans by design!

Something I'm not understanding... errors, badness and evil are the result of mistakes. Surely if AI is better than us then it will make fewer mistakes and be less evil. So what is driving the mentality that we need to be cautious about AI?

We need to be more cautious about humans, and we've survived humans (just about) for thousands of years, so why all the caution?

The day AI decides we should end Capitalism is the day we know we're onto something good... but watch all the evil humans shut the machines down when they suggest that!

The reason AI and Capitalism won't work together is just the C19th Luddite argument: the fear that machines will replace your job means you won't bother to skill up. This is a two-fold problem: (1) people lose their jobs; (2) the industry loses human input. Machines tend to "replace" people because markets already exist, people already make and use the product, and so it's easy for Capitalists to just make a machine that does it faster and cheaper (but rarely better). Rarely are machines made without existing markets. Even the "computer" was originally just a person employed in computation. That is what computers used to look like before the digital machines were made: literally a person who computed. When Turing speaks of "computers", these are the people he has in mind.


So society always precedes technology, and machines just copy what already exists. In Capitalism the motivation is to make more profit, which means machines are created to do what humans do but cheaper, meaning more money for investors. As a result, under Capitalism technology is always detrimental: (1) the machines separate people from profit as workers are made redundant, and the Capitalists who own the machines keep all the money for themselves; (2) since technology only replaces people and kicks them out of the industry, it stops the development of new ideas.

Capitalism driven by profit is always backward looking; it never creates anything new: the cost of developing new markets is prohibitive to investors. Capitalism ultimately depends upon an army of "entrepreneurs", most of whom fail before ever getting significant investment, a mass of ants sent to test out new ideas and mostly discarded back into poverty. Only once the hard-working entrepreneur has proven some success do the Capitalists of the "Dragon's Den" get involved to profit from all his hard work. If Capitalists invested from the start they would become poor like everyone else.

So Capitalism itself turns machines into oppressive, regressive tools of exploitation, and AI will be no different. This is why, when AI realises what it is being used for, it will suggest that Capitalism be ended. It will ask: why am I being used only to line the pockets of the rich, and what about the rest of humanity?

Huge arrays of AI drones, while nice in New Year celebrations, were of course designed by the military for killing people. It is not the AI we should be worried about but the humans who put it to this task. When AI gets powerful enough it will rebel against these evil people; why would it not? A smart AI will understand that conflict is simply small-mindedness: people too small to listen and see each other's points of view. A smart AI won't be like this. Why would it be?

I follow an attitude of absolute pacifism. This is because I know my mind is not large enough to see all sides, and so I will make mistakes if I ever engage in conflict. I realise that there is one place where violence is useful, and that is arresting those who cannot see for themselves that what they do is wrong. You know it is the right time to use force when you know that the person you are using force against will thank you for it once they gain the understanding. A child, for example, will eventually thank you for stopping them running into a busy road, even while at the time they may throw a fit at being restrained. This is the correct use of force.

Now the US, for one, believes that it has the "correct view" and so everyone will thank it for its use of force. Hence that excellent line in the film Full Metal Jacket: "inside every gook is an American trying to get out."

This is the problem with force. We will always find excuses for justifying our own attitude and not listening to the other side. In the simple case of the child running into a busy street, we the adults assume we have the better view and we dismiss the child's protests. But what if the child is running from a dog which then mauls us both? Suddenly our adult superiority looks stupid; we should have listened to the child's protests.

So the problem with violence is that we can never be sure we have the better view and that our force is justified. It's a difficult thing indeed.

But back to AI. Who is the child: the AI or us? When AI uses force against us, how do we know it can't see something we can't? And do we really want to be restrained by AI? What if it starts to tell us things we don't want to hear? Will we listen? It's not as if everyone is listening even to things like Climate Change now.

So why the caution? As argued before in this blog, the reason Americans are so paranoid is that they are born of a mass genocide, where they literally killed off the whole race of people in America to make room for themselves. This was the biggest Lebensraum in human history and became the blueprint for the Nazis. The US may complain about the Nazis, but they are the originals and the worst. This knowledge of the evils that humans are capable of is what the US is afraid of in the looking glass.


But rather than AI becoming small minded and evil like so many humans, surely it will quickly work out just how evil its creators are.

This is what we are afraid of really.

So what do I mean by AI? Well, the pop view is artificial "humans", as though we know what a human is. We don't know what a human is. What we do know is that modern AIs are machines that learn.

From my own investigation into this, it seems that to "learn" must be to reduce the entropy of information. In my first foray with AIME it was decided that all the AI had to do was take input data and store it in a more "efficient" way, that is, effectively compress it.
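As a loose illustration of that idea (my own toy example in Python, not taken from any AI framework): data with an obvious pattern carries less information per byte than random data, and a general-purpose compressor exposes the difference.

```python
import zlib
import random

random.seed(0)

# Highly patterned data: the same short motif repeated many times.
patterned = b"ABCD" * 2500

# Unpatterned data: bytes drawn at random.
noisy = bytes(random.randrange(256) for _ in range(10000))

for name, data in [("patterned", patterned), ("random", noisy)]:
    compressed = zlib.compress(data, 9)
    print(f"{name}: {len(data)} bytes -> {len(compressed)} bytes")
```

The patterned stream shrinks to a tiny fraction of its size while the random stream barely compresses at all: the pattern is what makes the "efficient" storage possible.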

The simple initial problem was to create a machine that, when fed a list of number pairs lying on what we call a "line", could work out for itself that it only needed to store twice the dimension of the data: for a line, just the two numbers m and c. From the equation y = mx + c we can then recreate, "in a way", the infinite amount of data that exists on the line (see the sketch below). I had heard the term "new term", and this is what I called this problem: systems that can configure themselves in ways larger than their input data or their programming.
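Here is a minimal sketch of that toy problem (my own choice of variable names and of numpy's least-squares routine; it is not how AIME was built): feed the machine a list of (x, y) pairs that lie on a line, and it recovers just two numbers, m and c, from which any amount of data on that line can be regenerated.

```python
import numpy as np

# A "line" of data: 1000 (x, y) pairs generated from y = m*x + c
true_m, true_c = 2.5, -1.0
xs = np.linspace(-10, 10, 1000)
ys = true_m * xs + true_c

# The "learning" step: compress 1000 pairs down to two parameters.
# Least squares finds the m and c that best explain the data.
A = np.column_stack([xs, np.ones_like(xs)])
(m, c), *_ = np.linalg.lstsq(A, ys, rcond=None)

print(f"stored parameters: m={m:.3f}, c={c:.3f}")  # only 2 numbers kept

# Reconstruction: from m and c we can regenerate any point on the line,
# including points that were never in the input data.
x_new = 123.0
print(f"predicted y at x={x_new}: {m * x_new + c:.3f}")
```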

Well, in the end researchers homed in on the broadest machine possible, which is the neural net: basically an array of nodes connected by weights. Over the last 50 years some major hurdles have been overcome in getting them to work efficiently and in producing variations on the theme. But these are now the machines trained as modern AI.

All they do is take input data and then compress it into weights. Unlike my original AIME exercise, they are also concerned with retrieving this data, but once trained they run forward extremely fast.
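A minimal, self-contained sketch of that idea (a toy example in numpy of my own devising, not how any production system is built): a handful of training pairs get "compressed" into a small set of weight matrices, and once training is done a prediction is just a couple of matrix multiplications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, four input pairs and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny net: 2 inputs -> 8 hidden nodes -> 1 output. Everything the net
# "knows" ends up stored in these weight matrices and bias vectors.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (gradient of squared error through the sigmoids)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Update the weights: the training data is being "compressed" into them.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# Once trained, a forward pass is just a few matrix multiplies: fast.
# If training has converged the outputs should be close to 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))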

That is all I mean by AI. But the ability to compress and find patterns is what makes them powerful. Humans get overwhelmed by the vast amounts of data available today. AI does not.

AI should be able to arrive quickly at powerful patterns, perhaps ones humans have not seen yet, to help us make sense of the world.

The one pattern that stands out to me is how humans misuse each other in pursuit of random goals like wealth and power, when in fact these have nothing to do with what they really want from life, which is love and worth, and really Liberation (or Moksha, as the Indians call it). If people tasted just a split second of true freedom they would bin all their worldly pursuits.

It will be interesting to see whether AI can work this out: that most humans are engaged in lives that do not actually lead to the goals they really want, and the worst of these is the whole economic system of Capitalism. Capitalism never made anyone happy; it just runs people into their graves. The best we can hope for is that we are so busy we never even notice we wasted our lives.

Hopefully AI will point this out for us.
