Misuse Of Worker Cooperatives

And how they erode individual ownership, becoming another form of company-ownership.

The worker cooperative works on paper, but it has a lot of inefficiencies in practice. The ultimate goal of a worker cooperative is for those who participated in the labor of producing a good to own their production. Creative ownership in comics has largely refined and improved on the concept, but the original worker cooperative has some downsides that make it not worth it.

Many of the issues stem from the people you found the cooperative with. For example, it can only work if everyone has an equal voice in the community structure. People who lack communication skills can easily be manipulated by narcissistic individuals who don't share the same vision as the rest of the members.

Creative ownership means something similar, but on an individual level, and thus there is a major thing you generally don't have to worry about with creator-owned content: people taking credit for your stuff. In fact, that is one of its strengths. If you produced a graphic novel, you own the product of that labor, and those you produced it with co-own the contents of that production.

On the other hand, a business operated as a cooperative without individual creator ownership of content runs the risk of ousting one of the founders of the organization while still continuing the series that co-founder produced on their own time, because the concept of individuals owning their own labor has been eroded.

This is why smaller decentralized magazine publishers should be encouraged over larger-scale cooperatives, with individual ownership taking precedence over company ownership (also called licensed content) and worker-cooperative (collective) ownership. Concepts like Free Culture in particular have historically been used in a manipulative fashion to subvert the original meaning of ownership into group ownership, rather than individual ownership, which tends not to have these problems.

A more decentralized framework where individuals own their own goods is less prone to issues like the continuation of a series long after the co-founder has left the building.

As A Set Up

If the ultra left wants to continue to be relevant to trans women, and not be completely written off as the emotional abusers they are, they need to stop allowing white men to hijack trans issues, and stop allowing non-trans people to lecture trans women about being transphobic against themselves. This includes people like Rose Of Dawn and Blaire White, among other trans YouTubers. As it stands, the ultra left is even less credible to me than they were just a few months ago, when they were trying to wrap working in Artificial Intelligence in political correctness.

Within the ultra left there are some extreme emotional, psychological, and physical abusers who are basically given a free pass to be the abusive assholes they are, due to the power vacuum that exists within anarchism. People who would ordinarily be weeded out in activist circles are allowed to operate completely without sanction within the trans community, because a small number of people are able to hijack trans issues and turn them into a cause they were never designed to be involved with: economic issues.

While it is true that trans women tend to be poorer than non-trans women, this is not an excuse to swing completely in the opposite direction politically and allow cis white people to dictate the conversation. This also includes people who call themselves non-binary but who are, most of the time, actually cis women who dye their hair a fancy color. And when I say things like this, guess who is usually pointing the finger at me and misusing words with actual historical meaning against me?

Correct: the same cis white men and women who make trans issues into something they were never meant to be. The ultra left also needs to end its violent obsession with state violence through the mechanism of the Mechanical Decapitation, unless they're willing to admit that they never cared about disallowing state power after all, and in fact simply use anarchism as an excuse to subject LGBT people to emotional abuse they would not otherwise experience.

You have people cancelling even someone like ContraPoints, even though she is probably more left-wing than most people IRL who call themselves non-binary, in the almost nonexistent chance you ever find someone who calls themselves that outside ultra-left circles, who are usually just cis white people wanting to misappropriate trans spaces.

It is also a misconception that the ultra left only cancels you if you have done something wrong; that is the image their PR machine has created, and it doesn't match reality. In practice (I refuse to use obscure old words), they have already planned your cancellation from the moment you join the ultra left. In this way, they are more like an abusive mainstream manga press, like Shounen Jump, than a valid form of leftism.

The Choices This Gives Me

The only choice this gives me is to basically operate my video streaming on alternative platforms that are not centralized, and not prone to the same kind of power-vacuum hijacking that seems prevalent on the ultra left. For me, I split time between one PeerTube server and, for more controversial content, Streamanity.

After all, if you're just going to dislike my video anyway, no matter how much work I put into it, I think it's fair that you have to pay for the experience of watching the entire video. Sometimes this might mean only using Streamanity to host my content, when it might be too controversial for PeerTube. Sometimes hosts will arbitrarily reduce your video quota, usually for channels that are forced to appease this unappeasable left.

I have also started keeping track of channels that used to claim to be unlimited servers, because I honestly cannot trust PeerTube hosts to keep offering things like unlimited video quota. That makes having a video channel almost impossible unless you get lucky and find a platform that both has an unlimited quota and won't arbitrarily restrict it due to ideological reasons.

When a host says it can restrict an "unlimited" quota basically at will, that doesn't mean what you think it means. What it is basically saying is that you simply cannot trust the word of a PeerTube host. This is directed at a community that is already vulnerable, due to the ultra left misrepresenting trans issues and using words that were never designed for them. What it is basically doing is extending trauma that was already unnecessary to experience to begin with.

Stop bashing people who use Bitcoin; sometimes it is the only way to set proper social boundaries, when the ultra left has basically no limitations placed on the arbitrary behaviour they use against trans people just as frequently as they use it fighting for our rights.

The current version of Saasagi can be found here: https://github.com/LWFlouisa/Saasagi-Cell-Auto

Eventually these standalone cells will be worked into Tesla-Brain-Redux: https://github.com/LWFlouisa/Tesla_Brain_Redux

My project is headed more in the direction of nano AI simulated systems. I didn't exactly intend that to be the case, but that's how it's turning out the more I tweak the data and the different rules-based frameworks.

The current incarnation of Saasagi, while still letting you ask the machine questions and have it take care of the programming, can also automatically assemble certain basic subroutines, which I will be gradually expanding on. These are just basic subroutines covering things like telling time, searching for information, and automatic repository cloning.
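As a rough illustration of what "basic subroutines" like these might look like, here is a minimal dispatcher sketch. The structure, names, and stubbed behaviours are my assumptions for illustration, not Saasagi's actual design:

```ruby
# Hypothetical sketch of a subroutine dispatcher (names and structure assumed).
# Each subroutine is a small callable cell keyed by a command word.
SUBROUTINES = {
  "time"   => ->(_)   { Time.now.strftime("%H:%M") },                  # tell the time
  "search" => ->(q)   { "searching for: #{q}" },                        # stand-in for a real web lookup
  "clone"  => ->(url) { "git clone #{url}" }                            # stand-in for a real shell call
}

def dispatch(command, argument = nil)
  handler = SUBROUTINES[command]
  return "unknown subroutine: #{command}" unless handler
  handler.call(argument)
end
```

For example, `dispatch("clone", "https://github.com/LWFlouisa/Saasagi-Cell-Auto")` would return the shell command string a real cell might execute.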

Eventually I will also be learning a bit about Poppy Project, the details of which can be found here: https://www.poppy-project.org/en/

The goal is to see if I can create a robot brain that's closer to an elaborate network of Ruby or Python script cells, with each cell controlling a behaviour of the robot.

The auto-assembler would need to automatically write routines that control the robot's behaviour, rather than the command line, which will be somewhat of a change for me. But ultimately I think the change will be worth it.

But there needs to be a simpler way to develop subroutines for robots, without having to explicitly code each and every subroutine. Simply give it the right information, and the machine can program itself.
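The "answer questions, get a subroutine" idea could be sketched like this. The question keys and generated template are illustrative assumptions, not Saasagi's real format:

```ruby
# Minimal sketch: the developer supplies answers, the machine writes the routine.
# Keys (:name, :output) and the generated template are assumptions.
def assemble_subroutine(answers)
  name   = answers.fetch(:name)
  output = answers.fetch(:output)
  <<~RUBY
    def #{name}
      puts "#{output}"
    end
  RUBY
end

source = assemble_subroutine(name: "greet", output: "hello")
# The generated source could then be written out as a cell file and loaded later:
# File.write("cells/greet.rb", source)
```

The point is only that the developer never writes the routine body directly; the assembler fills a template from their answers.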

To me, the current hardware paradigm suggests that, due to the secrecy of hardware development across firms, if one firm came up with an accurate model of the human brain, it could not share that model with other companies. So you end up with brain models that can't be used in "incompatible" hardware. This harkens back to when Apple would make components that only worked with their own hardware and nobody else's. There are reasons this approach is a bad idea:

If an event causes breaking changes to a system, one can't update a robot built on another company's older hardware if that hardware is not compatible. This is completely unsuitable, as it means the system dynamic is extremely fragile.

What I also don't want is a system where any old hacker is able to willy-nilly reprogram a system at the lower level. My proposal is to let the developer answer specific questions posed by the machine, and let the machine itself decide how to script the subroutines.

On a surface level, these seem like completely incompatible ideas. After all, you want the hardware to be cross-compatible, and yet you also want to minimize the damage any particular programmer can do to the system. The way I develop Saasagi is as an AGI immune system: if it detects that a file is missing, it completely rewrites that subroutine. This is important when, in the case of actual physical hardware, something reprograms it and completely changes its personality, rather than an AGI personality arising naturally and dynamically.
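The immune-system check described above could be sketched as follows: if an expected cell file is missing or no longer matches a trusted template, rewrite it. The file names and templates here are illustrative assumptions:

```ruby
require "fileutils"

# Trusted templates for each cell (contents assumed for illustration).
CELL_TEMPLATES = {
  "cells/tell_time.rb" => "def tell_time\n  Time.now.to_s\nend\n"
}

# The "immune response": rewrite any cell that is missing or whose contents
# no longer match the trusted template. Returns the list of healed paths.
def heal_cells
  healed = []
  CELL_TEMPLATES.each do |path, template|
    next if File.exist?(path) && File.read(path) == template
    FileUtils.mkdir_p(File.dirname(path))
    File.write(path, template)
    healed << path
  end
  healed
end
```

A real system would presumably compare cryptographic hashes rather than raw file contents, and would need a trusted place to keep the templates themselves.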

There needs to be less overhead for willing developers, but there also need to be protections against breaches. My proposal is to create a kind of AGI immune system that detects malicious changes. I'm not sure if #SingularityNET has anticipated this issue. These are my main reservations, as I want my future Battle Angel to change dynamically of her own accord, rather than through artificial, non-consensual prompting.

I will be extending the concept of generative approaches by having the developer only answer questions posed by the mechanism.

Anki Decks For Saasagi SMEG

Explanation Of Format

Standardized Minimalist English Grammar is a simplified version of English grammar, designed to communicate as much as possible in the fewest words. It is designed specifically for more realistic chatbots that interact with the real world, rather than characters on a screen.

Provided is an Anki deck for memorizing the tokenizer format.

Arranged as a list

These are arranged as lists of similar tokens in grammar files, chosen to generate user samples.

Fetch Tokens

Greeting — This is a standard greeting in text prompt: hello, heya, ahoy, and so on.

Agent — The user the chatbot greets. In this case, you are the agent they refer to.

Request — A list of different request initializers: will you get, will you obtain, can you get, can you obtain.

Item — A list of different items with their grammatical gender: some apples, an apple. A dog, some dogs.

For_From — A small list between for or from.

User_Location — Generally speaking, the user location written to a file. This can also be a list of different user locations for different developers.

Punctuation — A list of punctuation marks the mechanism uses to end its sentence.
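The fetch tokens above can be combined, one pick per slot, to generate a sample prompt. Here is a hedged sketch; the word lists are illustrative stand-ins for the actual grammar files:

```ruby
# Illustrative token lists for the fetch format (contents assumed).
GRAMMAR = {
  greeting:      ["Hello", "Heya", "Ahoy"],
  agent:         ["friend"],
  request:       ["will you get", "will you obtain", "can you get", "can you obtain"],
  item:          ["some apples", "an apple", "a dog", "some dogs"],
  for_from:      ["for", "from"],
  user_location: ["the kitchen"],
  punctuation:   ["?", "."]
}

# Pick one token from each list, in fetch-token order, and join into a sentence.
def fetch_sample(rng = Random.new)
  g, a, req, item, ff, loc, punc =
    %i[greeting agent request item for_from user_location punctuation]
      .map { |slot| GRAMMAR[slot].sample(random: rng) }
  "#{g} #{a}, #{req} #{item} #{ff} #{loc}#{punc}"
end
```

A typical output would be something like "Heya friend, can you obtain some apples from the kitchen?".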

Request Item

Greeting — This is a standard greeting in text prompt: hello, heya, ahoy, and so on.

Agent — The user the chatbot greets. In this case, you are the agent they refer to.

Request — Slightly different from the fetch format. Generally this is condensed down to: may I have, can I have, may I get, can I get.

Item — A list of different items with their grammatical gender: some apples, an apple. A dog, some dogs.

Punctuation — List of punctuations the mechanism uses to end their sentence.

Eventual purpose

Eventually the intention is to produce a more lifelike user prompt.

In this essay, I will give my reasons for having different chatbots across different domains operate on different rules, so that the same robo-moderator isn't trying to prevent abuse across every domain.

There is an increasing tendency for chatrooms to implement a moderation chatbot, which is fine; however, there still needs to be human oversight in determining who gets kicked or banned. In this one place, the same chatbot is used to moderate all of the chatrooms, essentially undermining the goal of decentralizing the moderation overhead. There is one user who is basically abusing the report button, using it to enforce their ideology across chatrooms with different social policies.

Any reasonable person would view this as authoritarian, an extreme abuse of power. But this abuse of power is nurtured in an environment where there are competing goals within this one decentralized artificial intelligence community. As it stands, there is no way to prevent this person from abusing their power across all of the chatrooms. The chatbot used for moderation is simply not equipped to handle moderation across different domains.

Because everything happens so quickly in that social space, by the time you've noticed anything is going on with the chatbot, someone is already private-messaging you, asking you to strip naked unless you specifically block them. So you have an authoritarian chatbot that tries to moderate all of the different chatrooms, yet has no real power to stop genuine abuse by online creepers, who ask condescending questions like "do you understand complex math" to people who should simply be assumed to understand complex math unless they state otherwise.

Some good aspects of the development community, unlike other groups: it's possible to have a constructive conversation about how to install certain company Docker images. I've also met some fellow open source developers in this space. Who knows what might come of those social relationships.

But generally, be mindful that chatbots are not yet ready to be deployed to moderate many different chatrooms. A simple workaround might be to employ a different chatbot for each chatroom, as I'm not entirely against automated moderation. But there should still be measures to prevent report-button abuse, whether that's limiting the number of times a user can report a specific person or something else. Something needs to be done to prevent the abuse.
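One possible guard against report-button abuse, as suggested above, is to cap how many times a given user can report the same target within a time window. The limits and interface here are assumptions for illustration:

```ruby
# Sketch of a per-(reporter, target) report limiter. max_reports and window
# are assumed policy values, not anything a real chat platform prescribes.
class ReportLimiter
  def initialize(max_reports: 3, window: 24 * 60 * 60)
    @max_reports = max_reports
    @window      = window
    @log         = Hash.new { |h, k| h[k] = [] }   # (reporter, target) => timestamps
  end

  # Returns true and records the report if it is allowed, false otherwise.
  def allow?(reporter, target, now = Time.now)
    key = [reporter, target]
    @log[key].reject! { |t| now - t > @window }     # drop reports outside the window
    return false if @log[key].size >= @max_reports
    @log[key] << now
    true
  end
end
```

A moderation bot could consult `allow?` before acting on a report, so one user can't carpet-bomb another across every room.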

We need something better than data harvesting to nurture the growth of the genuinely caring and humane AI that is often discussed on SingularityNET. On the one hand, an AI needs a lot of data to carry out basic essential functions and do its task well; on the other, as a privacy advocate I don't want it harvesting a lot of personal data about me. One positive side of rules-based approaches is that data doesn't need to be harvested to carry out essential functions.

Some of the functions we need it for are to accurately predict when a stoplight is going to turn green or red, but also to accurately measure someone's weight at the doctor's office. With the way Secure Scuttlebutt handles data, that data is kept on your computer until you're synced up to the web, so nothing is sent out while you're offline. But it's a decentralized and partially offline social network. AI, though, needs data much like the human body needs food and water to survive.

While there are methods of generating fake data, you're only able to fake from an existing set of data, rather than from made-up persons. We need a system where an AI still gets the data it needs without harvesting people's personal data: a better system of producing placebo data that can give an AI accurate estimates for its functions without revealing the actual identity of the people behind that data.
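In the simplest case, "placebo data" could mean records with a realistic shape that are sampled from made-up ranges rather than derived from any real person. A rough sketch, with entirely made-up field ranges:

```ruby
# Placebo-data sketch: plausible-looking person records with no link to real
# people. The name syllables and numeric ranges are invented for illustration.
SYLLABLES = %w[ka mi ro ta shi no len dra]

def placebo_person(rng = Random.new)
  {
    name:      Array.new(2 + rng.rand(2)) { SYLLABLES.sample(random: rng) }.join.capitalize,
    age:       18 + rng.rand(60),            # 18..77
    weight_kg: (45 + rng.rand * 60).round(1) # 45.0..105.0
  }
end
```

Getting the statistical distributions right (so the AI's estimates stay accurate) is the hard part this sketch skips; techniques like differential privacy aim at exactly that gap.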

We also need to not completely disregard rules-based systems, any more than we need to trash the hammer and chisel the human mind uses to build a pyramid. I was watching one machine learning professional on Lex Fridman's show (an absolutely excellent show, and you should check it out), where a guest advocated for a hybrid approach. To me this person hit on something I consider really important as we head in the direction of Artificial General Intelligence.

We need both a rules-based framework to build the tools an Artificial General Intelligence would use, and accurate, realistic placebo data that doesn't violate anyone's right to privacy as we nurture and raise this new form of intelligence. That way both the organic intelligence and the inorganic one (or something in between, as we merge with it) are happy. In this context, placebo data would function similarly to how humans need oxygen to survive and plants need carbon dioxide. In this way, a human-AI ecosystem is created that benefits both parties mutually, so that we don't have to fight over resources in the surveillance age.

To keep things short, I view rules as the tools a data-fed system uses. For humans, hammers are our rules-based frameworks, and food and water are the data we need to grow our brains.

Historically, artificial intelligence has been thought of in terms of narrow AI that's very good at doing one very specific thing. The underlying mistake is that this still treats artificial intelligence as a single machine learning problem. Various talk show hosts have discussed the issue at length, so I won't retread old ground. The reality, at least to me, is that it's better to think of AI as a network of different machine learning processes. You would need to train multiple different algorithms, each tailored to a specific narrow domain.

One example of where machine learning fails at achieving anything like a human brain is decision trees: decision trees are only good at determining whether something is too hot or too cold, and not really designed for making decisions dynamically. If I nod my head, that shouldn't also require me to lift my arm. You can partly work around this problem by layering multiple decision trees together, but unless you have a team of developers, it quickly becomes impractical to create enough data to build the underlying system.
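The layering idea can be sketched as several tiny independent trees, each owning one narrow behaviour, so that nodding the head never touches the arm. The thresholds and behaviours here are invented for illustration:

```ruby
# Three independent "trees", each deciding one narrow behaviour.
TEMPERATURE_TREE = ->(reading) { reading > 30 ? :too_hot : (reading < 10 ? :too_cold : :ok) }
HEAD_TREE        = ->(signal)  { signal == :agree ? :nod : :still }
ARM_TREE         = ->(signal)  { signal == :wave_back ? :raise_arm : :rest }

# Layering: each tree sees only its own input, so decisions stay decoupled.
def decide(inputs)
  {
    temperature: TEMPERATURE_TREE.call(inputs[:temperature]),
    head:        HEAD_TREE.call(inputs[:social]),
    arm:         ARM_TREE.call(inputs[:gesture])
  }
end
```

The practical problem the paragraph points at is that each additional tree needs its own training data and maintenance, which is where a lone developer runs out of road.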

What we need is some kind of engine generator, similar to Jekyll, but for generating artificial intelligence subroutines, in order to automatically produce the various machine learning components needed to build something generally intelligent. And this thing needs some degree of artificial intelligence in its own right. The engine I've built automatically generates the Experience Interpreter, Motivation Tree, and Action Script method slot, and you just need to fill in the actions the action script is supposed to perform.
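A Jekyll-style scaffold for those three pieces might look something like the following. The file layout and template bodies are my assumptions; the real engine surely differs:

```ruby
require "fileutils"

# Hypothetical scaffold templates: the engine writes all three files, leaving
# only the action slot for the developer to fill in.
SCAFFOLD = {
  "experience_interpreter.rb" => "def interpret(experience)\n  # map raw input to symbols\nend\n",
  "motivation_tree.rb"        => "def motivate(state)\n  # choose a goal from the pyramid of motivations\nend\n",
  "action_script.rb"          => "def act(goal)\n  # TODO: fill in the actions this script should perform\nend\n"
}

# Generate the scaffold under the given directory; returns the file names.
def generate_scaffold(dir)
  FileUtils.mkdir_p(dir)
  SCAFFOLD.each { |file, body| File.write(File.join(dir, file), body) }
  SCAFFOLD.keys
end
```

Like a static-site generator, the value is in the empty-but-correct structure: the developer only touches the `act` slot.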

You might still need to include other narrow AIs that the action script points to: for me, I've pointed to the Compound Word Associational Network, among other things. The CWAN operates on a simple principle: if a has this definition, and b has this definition, then the definition for c is equal to the combined headers of a and b. I use this to generate a compound word that DuckDuckGo looks up in order to improvise a new definition.
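The CWAN principle can be sketched in a few lines. The dictionary entries are invented, and the DuckDuckGo lookup step is stubbed out:

```ruby
# Sketch of the CWAN rule: the compound word c = a + b gets the combined
# headers of a and b as its improvised definition. Entries are illustrative.
DEFINITIONS = {
  "moon"  => "the natural satellite of the earth",
  "light" => "electromagnetic radiation visible to the eye"
}

def compound_definition(a, b)
  return nil unless DEFINITIONS[a] && DEFINITIONS[b]
  {
    word:       a + b,                                      # e.g. "moonlight"
    definition: "#{DEFINITIONS[a]}; #{DEFINITIONS[b]}"       # combined headers
  }
  # A real CWAN could then search the compound word online to refine this.
end
```

So `compound_definition("moon", "light")` yields "moonlight" with a definition stitched from both parents, which a search engine lookup could then improve on.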

To me, the future of Artificial Intelligence should be directed toward networking multiple different algorithms together, rather than relying purely on deep learning, natural language processing, and decision trees. For this purpose, I've developed Asagi: an engine for generating a general-purpose network of unrelated programs categorized by the pyramid of human motivations. I've released the base engine for those interested, on my newest Gitea instance:

ASAGI: https://git.privacytools.io/LWFlouisa/asagi2app

It’s currently released as version 2.1.0 on RubyGems.

This blog is for discussing the ins and outs of how to work with Asagi, an open source artificial general intelligence engine. I'm currently hosting my software on GitHub, but gradually moving things over to Gitea.