LWFlouisa

The current version of Saasagi can be found here: https://github.com/LWFlouisa/Saasagi-Cell-Auto

Eventually these stand-alone cells will be worked into Tesla-Brain-Redux: https://github.com/LWFlouisa/Tesla_Brain_Redux

My project is headed more in the direction of Nano AI simulated systems. I didn't exactly intend that to be the case, but that's how it's turning out the more I tweak the data and the different rules-based frameworks.

The current incarnation of Saasagi still lets you ask the machine questions and then have it take care of the programming, but it can also automatically assemble certain basic subroutines, which I will be gradually expanding on: these are just basic subroutines that cover things like telling time, searching for information, and automatic repository cloning.
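As a rough illustration of what auto-assembling one of those basic subroutines could look like, here is a minimal Ruby sketch that writes a time-telling cell out to its own file. The cells/ directory, file name, and method name are all hypothetical, not Saasagi's actual layout.

```ruby
require "fileutils"

# Template for a hypothetical auto-generated cell that reports the current time.
subroutine = <<~RUBY
  # Auto-generated cell: reports the current time.
  def tell_time
    Time.now.strftime("It is %I:%M %p on %A, %B %d.")
  end
RUBY

# Write the assembled subroutine into its own cell file.
FileUtils.mkdir_p("cells")
File.write("cells/tell_time.rb", subroutine)
```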

Eventually I will also be learning a bit about the Poppy Project, the details of which can be found here: https://www.poppy-project.org/en/

The goal is to see if I can create a robot brain that's closer to an elaborate network of Ruby or Python script cells, and have each cell control the behaviour of the robot.

The auto-assembler would need to automatically write routines that control the robot's behaviour rather than the command line, which will be somewhat of a change for me. But ultimately I think the change will be worth it.

But there needs to be a simpler way to develop subroutines for robots, without having to explicitly code each and every subroutine. Simply give it the right information, and the machine can program itself.

To me, the current paradigm of hardware suggests that, due to the secrecy of hardware development in different firms, if one came up with an accurate model of the human brain, one could not share this model with other companies. So you end up with brain models that can’t be used in “incompatible” hardware. This harkens back to when Apple would make hardware components that only worked with their hardware and nobody else’s. There are reasons this approach is a bad idea:

If there ends up being an event that causes breaking changes to a system, one can’t update any particular robot with the older hardware of another company if their hardware is not compatible. This is completely unsuitable, as it means the system dynamic is extremely fragile.

What I also don’t want is a system where any old hacker is able to willy-nilly reprogram a system at the lower level. My proposal is to let the developer answer specific questions to the machine, and let the machine itself decide how to script the subroutines.

On a surface level, these seem to be completely incompatible ideas. After all, you want the hardware to be cross-compatible, and yet you also want to minimize the damage any particular programmer can do to the system. The way I develop Saasagi is as an AGI immune system: if it detects that a file is missing, it completely rewrites that subroutine. This is important when, in the case of actual physical hardware, something reprograms it and completely changes its personality, rather than an AGI personality arising naturally and dynamically.
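A minimal sketch of that immune-system idea, assuming a hypothetical cells/ layout and template: each known cell is checked, and any file that has gone missing or no longer matches its template is rewritten.

```ruby
require "fileutils"

# Hypothetical cell templates; the paths and contents are placeholders,
# not Saasagi's actual files.
CELL_TEMPLATES = {
  "cells/tell_time.rb" => <<~CELL
    def tell_time
      Time.now.strftime("%I:%M %p")
    end
  CELL
}

CELL_TEMPLATES.each do |path, template|
  FileUtils.mkdir_p(File.dirname(path))
  # Leave healthy cells alone; rewrite anything missing or tampered with.
  next if File.exist?(path) && File.read(path) == template
  File.write(path, template)
end
```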

There needs to be less overhead for willing developers, but there also need to be protections against breaches. My proposal is to create a kind of AGI immune system that detects malicious changes. I’m not sure if #SingularityNet has anticipated this issue. These are my main reservations, as I want my future Battle Angel to change dynamically of her own accord, rather than through artificial, non-consensual prompting.

I will be extending the concept of generative approaches by having the developer only answer questions to the mechanism.
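Purely as an illustration of that question-driven flow, the sketch below collects two answers and slots them into a generated subroutine. The questions, template, and file layout are hypothetical, not Saasagi's actual question set.

```ruby
# Ask the developer a couple of questions.
print "What should the subroutine be called? "
name = gets.chomp

print "What shell command should it run? "
command = gets.chomp

# Slot the answers into a generated subroutine.
generated = <<~RUBY
  def #{name}
    system(#{command.inspect})
  end
RUBY

File.write("#{name}.rb", generated)
puts "Wrote #{name}.rb"
```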

Anki Decks For Saasagi SMEG

Explanation Of Format

Standardized Minimalist English Grammar is a simplified version of English grammar, designed to communicate as much depth as possible in the fewest words. It is intended specifically for more realistic chatbots that interact with the real world, rather than characters on a screen.

Provided is an Anki deck for memorizing the tokenizer format.

Arranged as a list

These are arranged as lists of similar tokens in grammar files, chosen to generate user samples; a sketch of how the slots combine into a prompt follows the two lists below.

Fetch Tokens

Greeting — This is a standard greeting in a text prompt: hello, heya, ahoy, and so on.

Agent — The user the chatbot greets. In this case, you are the agent they refer to.

Request — A list of different request initializers: will you get, will you obtain, can you get, can you obtain.

Item — A list of different items with their grammatical number: some apples, an apple; a dog, some dogs.

For_From — A small list choosing between for and from.

User_Location — Generally speaking, the user location written to a file. This can also be a list of different user locations for different developers.

Punctuation — A list of punctuation marks the mechanism uses to end its sentence.

Request Item

Greeting — This is a standard greeting in a text prompt: hello, heya, ahoy, and so on.

Agent — The user the chatbot greets. In this case, you are the agent they refer to.

Request — Slightly different from the fetch format. Generally this is condensed down to: may I have, can I have, may I get, can I get.

Item — A list of different items with their grammatical number: some apples, an apple; a dog, some dogs.

Punctuation — A list of punctuation marks the mechanism uses to end its sentence.
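Here is a rough sketch of how these token slots could be combined into a sample prompt in the fetch format. The word lists are illustrative stand-ins for the actual grammar files.

```ruby
# Illustrative token slots for the fetch format.
fetch = {
  greeting:      ["Hello", "Heya", "Ahoy"],
  agent:         ["agent"],
  request:       ["will you get", "will you obtain", "can you get", "can you obtain"],
  item:          ["some apples", "an apple", "a dog", "some dogs"],
  for_from:      ["for", "from"],
  user_location: ["the kitchen"],
  punctuation:   ["?"]
}

# Pick one token from each slot and assemble a user sample.
sample = "%s %s, %s %s %s %s%s" % [
  fetch[:greeting].sample, fetch[:agent].sample, fetch[:request].sample,
  fetch[:item].sample, fetch[:for_from].sample,
  fetch[:user_location].sample, fetch[:punctuation].sample
]

puts sample   # e.g. "Heya agent, can you get some apples from the kitchen?"
```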

Eventual purpose

Eventually the intention is to produce a more lifelike user prompt.

In this essay, I will give my reasons for supporting having different chatbots in different domains operate on different rules, so that the same robo-moderator isn't trying to prevent abuse across every domain.

There is an increasing tendency for chatrooms to implement a moderation chatbot, which is fine; however, there still needs to be human oversight in determining who gets kicked or banned from the chatroom. In this one place, the same chatbot is used to moderate all of the chatrooms, essentially undermining the goal of decentralizing the moderation overhead. There is one user who is abusing the report button, and using it to enforce their ideology across chatrooms with different social policies.

Any reasonable person would view this as authoritarian, an extreme abuse of power. But this abuse of power is nurtured in an environment where there are competing goals in this one decentralized artificial intelligence community. As it stands, there is no way to prevent this person from abusing their power across all of the chatrooms. The chatbot used to moderate is simply not equipped to handle moderation across different domains.

Because everything happens so quickly in that social space, by the time you’ve noticed anything was going on with the chatbot, someone is already privately messaging you asking you to strip naked unless you specifically block them. So you have an authoritarian chatbot that tries to moderate all of the different chatrooms, and yet has no real power to stop genuine abuse by creepers online, who ask abusive questions like “do you understand complex math” to people who should be assumed to understand complex math unless they specifically state otherwise.

Some of the good aspects of the development community, unlike other groups: it’s possible to have a constructive conversation about how to install certain company Docker images. I’ve also met some fellow open source developers in this space. Who knows what might end up coming of those social relationships.

But generally, be mindful that chatbots are not yet ready to be deployed to moderate a bunch of different chatrooms. A simple workaround might be to employ a different chatbot for each chatroom, as I’m not entirely against automated moderation. But there should still be measures to prevent report-button abuse, whether that’s limiting the number of times a user can report a specific person or something else. Something needs to be done to prevent the abuse.

We need something better than data harvesting to nurture the growth of the genuinely caring and humane AI that is often discussed on SingularityNet. On one hand, an AI needs a lot of data to carry out basic essential functions and do its task well, and yet as a privacy advocate I also don’t want it harvesting a lot of personal data about me. One positive side to rules-based approaches is that data doesn’t need to be harvested to carry out essential functions.

Some of the functions we need are for it to be able to accurately predict when a stoplight is going to turn green or red, but also to accurately measure someone’s weight at the doctor’s office. With the way Secure Scuttlebutt handles data, that data is kept on your computer until you’re synced up to the web, so nothing is being sent out while you’re offline; but that is a decentralized and partially offline social network. An AI, though, needs data much like the human body needs food and water to survive.

While there are methods of generating fake data, you’re only able to fake it from an existing set of data, rather than from made-up persons. We need a system where an AI still gets the data it needs without harvesting people’s personal data: a better system of producing placebo data that can give an AI accurate estimates for its functions, without necessarily revealing the actual identities of the people behind that data.
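As a hedged sketch of what such placebo data might look like, the snippet below generates wholly made-up person records that keep a plausible statistical shape without pointing back to any real individual. The field names, word lists, and ranges are illustrative assumptions.

```ruby
# Small made-up pools to draw from; nothing here comes from real people.
FIRST_NAMES = %w[Avery Kai Noor Remi Sol]
CITIES      = %w[Riverton Lakeview Marston Eastvale]

# Build one placebo person with a plausible age and weight.
def placebo_person
  {
    name:      FIRST_NAMES.sample,
    city:      CITIES.sample,
    age:       rand(18..80),
    weight_kg: (45 + rand * 60).round(1)
  }
end

dataset = Array.new(100) { placebo_person }
p dataset.first(3)
```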

We also need to not completely disregard rules-based systems, any more than we need to trash the hammer and chisel the human mind uses to build a pyramid. I was watching one machine learning professional on Lex Fridman’s show ( an absolutely excellent show, and you should check it out ), where the guest advocated for a hybrid approach. To me this person hit on something that I consider really important as we head in the direction of Artificial General Intelligence.

We need both a rules-based framework to build the tools that an Artificial General Intelligence would use, and accurate, realistic placebo data that doesn’t violate anyone’s right to privacy as we nurture and raise this new form of intelligence. In this way both the organic intelligence and the inorganic ( or somewhere in between as we merge with it ) are happy. In this context, placebo data would function similarly to how humans need oxygen to survive and plants need carbon dioxide to survive. In this way, a human and AI ecosystem is created that benefits both parties mutually, so that we don’t have to fight for resources in the surveillance age.

To keep things short, I view rules as the tools a data-fed system uses. For humans, rules-based frameworks are our hammers, and data is the food and water we need to grow our brains.

Historically, artificial intelligence has been thought of in terms of Narrow AI that’s very good at doing a very specific thing. The underlying mistake here is that it still treats artificial intelligence as a machine learning problem. Various talk show hosts have discussed the issue at length, so I won’t retread old ground. However, the reality, at least to me, is that it’s better to think of AI as a network of different machine learning processes. You would need to train multiple different algorithms, each tailored to a specific narrow domain.

One example of where machine learning fails at achieving anything like a human brain is with Decision Trees: decision trees are only good at determining whether something is too hot or too cold, and are not really designed for making decisions dynamically. If I nod my head, that shouldn’t also require me to lift up my arm. You can somewhat work around this problem by layering multiple different decision trees together. But unless you have a team of developers, it quickly becomes impractical to create enough data to build the underlying system.
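To make the layering idea concrete, here is a toy sketch where each behaviour gets its own independent decision rule, so triggering one (nodding) doesn't drag another (raising an arm) along with it. The rules and inputs are deliberately simplistic assumptions, not a real decision-tree implementation.

```ruby
# Each behaviour has its own independent rule over the same input.
decisions = {
  nod_head:  ->(input) { input[:greeted] },
  raise_arm: ->(input) { input[:asked_to_wave] },
  step_back: ->(input) { input[:distance_cm] && input[:distance_cm] < 30 }
}

input = { greeted: true, asked_to_wave: false, distance_cm: 120 }

# Only the rules that fire produce actions; the rest stay quiet.
actions = decisions.select { |_name, rule| rule.call(input) }.keys
puts actions.inspect   # => [:nod_head]
```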

What we need is some kind of Engine Generator, similar to Jekyll, but for generating artificial intelligence subroutines, in order to automatically generate the various machine learning protocols that are needed to build something generally intelligent. And this thing needs to have some degree of artificial intelligence in its own right. The engine I’ve built automatically generates the Experience Interpreter, Motivation Tree, and Action Script method slot, and you just need to fill in the actions the action script is supposed to perform.
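As a hypothetical illustration of that "method slot" idea (the class name, file name, and comments below are mine, not Asagi's actual output), a generated Action Script skeleton might look something like this, with the body left for the developer to fill in:

```ruby
# Emit a generated Action Script skeleton with an empty method slot.
action_script = <<~RUBY
  class ActionScript
    # Filled in by the developer: what the agent actually does
    # when the Motivation Tree selects this action.
    def perform
      # TODO: point this at a narrow AI or a concrete subroutine
    end
  end
RUBY

File.write("action_script.rb", action_script)
```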

You might still need to include other narrow AIs that the action script points to: in my case I’ve pointed to the Compound Word Associational Network, among other things. The CWAN operates on a simple principle: if a has this definition, and b has this definition, then the definition for c is equal to the combined headers of a and b. I use this to generate a compound word that DuckDuckGo looks up in order to improvise a new definition.
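A minimal sketch of that principle, leaving out the DuckDuckGo lookup step: the compound word's improvised definition is simply the combined definitions of its parts. The example words and definitions are made up for illustration.

```ruby
# Known definitions for the two parts.
definitions = {
  "rain" => "water falling from clouds",
  "coat" => "an outer garment worn for warmth"
}

# Combine a and b into a compound word with a merged definition.
def improvise_definition(a, b, definitions)
  compound = a + b
  [compound, "#{definitions[a]}; #{definitions[b]}"]
end

word, meaning = improvise_definition("rain", "coat", definitions)
puts "#{word}: #{meaning}"
# => "raincoat: water falling from clouds; an outer garment worn for warmth"
```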

To me, the future of Artificial Intelligence should be directed toward networking multiple different algorithms together, rather than relying purely on Deep Learning, Natural Language Processing, and Decision Trees. For this purpose, I’ve developed Asagi: an engine for generating a general-purpose network of unrelated programs categorized by the pyramid of human motivations. I’ve released the base engine for those interested, on my newest Gitea instance:

ASAGI: https://git.privacytools.io/LWFlouisa/asagi2app

It’s currently released as version 2.1.0 on RubyGems.

This blog is for discussing the ins and outs of how to work with Asagi, an open source artificial general intelligence engine. I’m currently hosting my software on GitHub, but gradually moving my things over to Gitea.