Sunday, 14 August 2022

Physical systems have fixed parameter-space dimensions, although the parameters are arbitrary

https://www.nature.com/articles/s43588-022-00281-6

It appears that while the choice of variables is arbitrary, models of physical systems must have a parameter space of sufficient dimension.

After training an AI to model a physical system, the researchers repeated the training to see whether the AI would arrive at the same solution. It didn't, but the number of variables it found was constant, and matched the count in existing physical models.
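The invariance they report can be illustrated with a toy sketch (this is not the paper's method, just an analogy): a pendulum's state is intrinsically 2-D (angle, angular velocity), and however you re-express the trajectory through a different linear choice of observables, the recovered dimension stays 2. All names here are my own invention.

```python
import numpy as np

def pendulum_trajectory(n=2000, dt=0.01):
    """Simulate a simple pendulum; the state (theta, omega) is 2-D."""
    theta, omega = 1.0, 0.0
    states = []
    for _ in range(n):
        omega += -9.8 * np.sin(theta) * dt
        theta += omega * dt
        states.append((theta, omega))
    return np.array(states)

def estimated_dimension(data, tol=1e-8):
    """Count significant singular values of the centred data matrix."""
    centred = data - data.mean(axis=0)
    s = np.linalg.svd(centred, compute_uv=False)
    return int((s / s[0] > tol).sum())

traj = pendulum_trajectory()
for seed in (1, 2):
    # Two different arbitrary "variable choices": random 10-D embeddings.
    embed = np.random.default_rng(seed).normal(size=(2, 10))
    print(estimated_dimension(traj @ embed))  # 2 both times
```

The embeddings differ, just as the AI's repeated runs found different variables, but the dimension count is a property of the system, not of the parametrisation.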

I wonder what would happen if this AI was trained on itself?

Isn't the result indeterminate, or actually infinite? Starting from random weights, the AI would be a hugely complex system to model, with the number of weights being the number of hidden variables. But could it actually deduce its own hidden variables? The question is whether the error-propagation function leads to a diverging, converging, or simply chaotic system. At the limit, the answer would be the number of variables (weights) in the actual AI. But would it end up modelling a virtual machine "within" this system? Perhaps the actual behaviour of the system would be considerably simpler than the underlying hardware, and could be modelled with fewer weights. The only constraint would be a kind of "fixed point" solution, where the number of variables in the model being trained equals the number of variables in the output of that same model.
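The fixed-point idea can be sketched in a purely speculative way. Suppose some mapping `estimated_vars(n)` told us how many hidden variables the AI would report for a network of n weights; the self-consistent size is a fixed point n* with estimated_vars(n*) == n*. The mapping below is entirely made up (it just assumes the behaviour compresses logarithmically relative to the hardware), so only the iteration scheme, not the numbers, carries the idea.

```python
import math

def estimated_vars(n_weights):
    # Hypothetical stand-in: assume the effective behaviour is much
    # simpler than the underlying hardware, compressing logarithmically.
    return max(1, int(10 * math.log2(n_weights)))

def fixed_point(n0, max_iter=100):
    """Iterate the self-modelling map until the variable count is stable."""
    n = n0
    for _ in range(max_iter):
        nxt = estimated_vars(n)
        if nxt == n:
            return n
        n = nxt
    return n

print(fixed_point(10**6))  # settles at a count far below the weight count
```

Under this (invented) compressive mapping the iteration converges; a mapping that amplified complexity instead would diverge or cycle, which is exactly the converging/diverging/chaotic trichotomy above.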
