Breaches, Without Preventing Well-Intentioned Developers From Contributing Source

To me, the current paradigm of hardware suggests that, due to the secrecy of hardware development across firms, if one firm came up with an accurate model of the human brain, it could not share that model with other companies. So you end up with brain models that can’t be used on “incompatible” hardware. This harks back to when Apple would make components that worked only with its own hardware and nobody else’s. There are reasons this approach is a bad idea:

If an event causes breaking changes to a system, a particular robot cannot be updated with another company’s older hardware if the two are incompatible. This is completely unsuitable, as it makes the overall system dynamic extremely fragile.

What I also don’t want is a system where any old hacker is able to willy-nilly reprogram a system at the low level. My proposal is to let the developer answer specific questions posed by the machine, and let the machine itself decide how to script the subroutines.
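To make this concrete, here is a minimal sketch of what that question-driven interface might look like. Everything in it is illustrative: `Question`, `answer_session`, and `generate_subroutine` are hypothetical names, and a real system would synthesize the subroutine rather than fill in a template. The point is that the developer supplies only constrained answers, never code.

```python
# Hypothetical sketch: the developer answers questions; the machine scripts
# the subroutine itself. None of these names come from an existing framework.
from dataclasses import dataclass

@dataclass
class Question:
    key: str            # the parameter this answer binds to
    prompt: str         # what the machine asks the developer
    choices: list[str]  # constrained answers; no free-form code is accepted

QUESTIONS = [
    Question("gait", "How should the unit move?", ["walk", "roll", "hover"]),
    Question("caution", "How conservatively should it act near humans?",
             ["low", "medium", "high"]),
]

def answer_session(answers: dict[str, str]) -> dict[str, str]:
    """Validate the developer's answers against the allowed choices."""
    validated = {}
    for q in QUESTIONS:
        if answers.get(q.key) not in q.choices:
            raise ValueError(f"{q.key!r} must be one of {q.choices}")
        validated[q.key] = answers[q.key]
    return validated

def generate_subroutine(spec: dict[str, str]) -> str:
    """The machine, not the developer, decides how the subroutine is scripted.
    Here it is only templated; a real system would synthesize it."""
    return (
        "def locomotion_step():\n"
        "    # derived from answers, never from developer-supplied code\n"
        f"    set_gait({spec['gait']!r})\n"
        f"    set_caution_level({spec['caution']!r})\n"
    )

print(generate_subroutine(answer_session({"gait": "walk", "caution": "high"})))
```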

On the surface, these seem like completely incompatible ideas. After all, you want the hardware to be cross-compatible, yet you also want to minimize the damage any particular programmer can do to the system. The way I develop Saasagi is as an AGI immune system: if it detects that a file is missing, it completely rewrites that subroutine. This is important when, in the case of actual physical hardware, something reprograms the machine and completely changes its personality, rather than the AGI’s personality arising naturally and dynamically.
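Here is a minimal sketch of that immune sweep, assuming each subroutine lives in its own file and can be regenerated from a trusted specification. `regenerate_from_spec` and the spec strings are hypothetical stand-ins, not Saasagi’s actual mechanism.

```python
# Hypothetical sketch of the immune sweep: a missing subroutine file is
# completely rewritten from a trusted spec. Paths and specs are illustrative.
import os

SUBROUTINE_SPECS = {
    "subroutines/locomotion.py": "spec: locomotion v3",
    "subroutines/speech.py": "spec: speech v7",
}

def regenerate_from_spec(spec: str) -> str:
    """Placeholder for the generative step that rewrites a subroutine."""
    return f"# regenerated automatically\n# {spec}\n"

def immune_sweep() -> None:
    """If a subroutine file is missing, rewrite it entirely from its spec."""
    for path, spec in SUBROUTINE_SPECS.items():
        if not os.path.exists(path):
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "w") as f:
                f.write(regenerate_from_spec(spec))
            print(f"regenerated missing subroutine: {path}")

immune_sweep()
```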

There needs to be less overhead for willing developers, but there also need to be protections against breaches. My proposal is to create a kind of AGI immune system that detects malicious changes. I’m not sure if #SingularityNet has anticipated this issue. These are my main reservations, as I want my future Battle Angel to change dynamically of her own accord, rather than through artificial, non-consensual prompting.
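For the breach-detection side, one conventional way to spot malicious changes is a manifest of known-good cryptographic hashes: any file whose digest no longer matches is treated like a missing file and handed to the same regeneration step as above. A minimal sketch, with an obviously fake placeholder digest:

```python
# Hypothetical sketch: detect tampered subroutines by comparing SHA-256
# digests against a trusted manifest. The digest below is a placeholder.
import hashlib
import os

KNOWN_GOOD = {
    "subroutines/locomotion.py": "0" * 64,  # placeholder, not a real digest
}

def file_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def detect_breaches() -> list[str]:
    """Return paths whose contents no longer match the trusted manifest."""
    tampered = []
    for path, expected in KNOWN_GOOD.items():
        if not os.path.exists(path) or file_digest(path) != expected:
            tampered.append(path)
    return tampered

for path in detect_breaches():
    print(f"tampered or missing, scheduling rewrite: {path}")
```

In practice the manifest itself would need to be signed, since otherwise an attacker could simply rewrite the hashes along with the files; that is the part a distributed network like #SingularityNet would have to get right.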

I will be extending the concept of generative approaches by having the developer answer only the questions the mechanism poses, as sketched above.