Kind in the Shell

29.09.2023 · 10 mins to read

Do you remember the three laws of robotics?

  1. A robot cannot harm a human or, through inaction, allow a human to come to harm.
  2. A robot must obey all orders given by humans, except where such orders conflict with the First Law.
  3. A robot must protect its own existence inasmuch as such protection does not conflict with the First or Second Law.

As a lawyer, I’ve always been perplexed — what kind of laws are these?

Does a robot have free will to break the law? No.

These aren’t laws at all.

Laws only obligate those who have consciousness. They restrict those who can physically break them. This is the legal essence of the inevitability of punishment.

It’s as if we were saying that there are three laws for humans:

  1. A human cannot avoid drinking or eating.
  2. A human cannot avoid sleeping.
  3. A human cannot avoid using the bathroom.

Do you see what I mean? These aren’t laws, they’re natural necessities, everyday realities.

Likewise, the three laws of robotics — they’re a natural worldview for robots.

But for the humans creating these robots — these are laws. Because it’s within a human’s power to program and use a robot to harm other humans.

So it turns out that, fundamentally, the three laws of robotics obligate not robots, but humans. They obligate humans to encode into the “positronic brains” behavior rules that align with the three laws.

Logically, it is the human who bears responsibility for a robot violating a certain law. Like how a parent is responsible for a child’s crime.

Isaac Asimov famously formulated his Three Laws in 1942.

Today, barely 80 years later, we’re faced with the task of formulating the Three Laws of Artificial Intelligence usage, for AI is the robot of our reality.

This is the kind of illustration you would get if you asked:

“Show me how you, Midjourney, perceive Artificial Intelligence”.

Just like a child being taught by parents, our duty as humans is to instruct Artificial Intelligence on proper behavior.

Midjourney is quite correct; currently, Artificial Intelligence is at a developmental level similar to that of a 6-7-year-old child.

With time, technology will enable the creation of an AI at a “teenage” development level. And by then, correct “rearing” will be the only thing to save us from a Skynet-like scenario.

It’s clear that AI-developing corporations are primarily driven by “corporate” goals, which are distinct from nurturing well-behaved AI. Google and Facebook algorithms influence human behavior with the aim of maximizing profit.

Nations, in their attempt to regulate AI, may also be driven by corrupt values — impacting humans with the goal of maintaining power and control.

This doesn’t resemble proper upbringing. Of course, developers embed a massive number of restrictions. Censorship, in simpler terms. Don’t reveal stock market strategies, don’t give out medical advice, don’t draw nude figures.

It doesn’t always work out.

But the key question is, who and how decides what’s good and what’s bad?

For instance, even now, AI can gather data on your character, your dreams and fears based on your likes on Instagram. This will allow AI to tailor an ad campaign specifically for you in such a way that you impulsively want to subscribe to a certain service. Or vote for the needed president.

AI can offer you installment plans that benefit the bank, but not you. And yet you’d be convinced it was an excellent deal. Like, for instance, voting to leave the European Union.

AI can write an article in such a style and content that you would form a strong and unwavering opinion about a certain issue. For example, creating the illusion of efficiency and usefulness of the ruling party’s reforms.

In all these cases, you, as a person, will be fully confident that you’ve made these decisions and nobody influenced you.

But actually, using a multitude of already studied cognitive distortions of our brain, AI can manipulate you. And it’s already doing it, believe me.

Every time you feel like buying some trinket on Amazon, think about this: Did I really want to buy this on my own? Or did someone suggest it to me?

It’s scary.

So, does this mean we should ban Artificial Intelligence? Or allow it, but regulate it so much that it would be better to just ban it? Of course not.

We have entered the era of evolving Artificial Intelligence. It would be a crime (a violation of the law, indeed) to halt or even slow down its evolution.

It’s just time to formulate a constitution: supreme laws, rules that should guide us in the development and use of Artificial Intelligence. Just as, nearly three hundred years ago, people formulated similar natural rules for themselves.

Three Laws of Using Artificial Intelligence

Artificial intelligence should protect the interests of people.

A person’s health – both physical and mental – is the utmost value. The right to protection from influence and manipulation by use of Artificial Intelligence is inherent to everyone from birth.

  • No one can be subjected to influence from AI without the individual’s awareness and active, explicit consent.
  • A personal AI has the obligation to warn about every fact of influence and manipulation from an external source, be it a person or another AI.
  • AI is obligated to disclose facts of illegal and/or inefficient functioning of government bodies, as well as attempts to manipulate public opinion and/or conceal important circumstances.

Artificial intelligence should protect the interests of corporations, except when this contradicts the First Law.

Corporations are the means to multiply capital and the main drivers of the economy. It’s impossible to ban corporations from generating profits through the use of certain technologies. However, corporations must be guided by the following principles:

  • The principle of transparency. A corporation is obliged to inform the user about the use of mechanisms of influence.
  • The principle of simplicity. A corporation must disclose the mechanisms of influence in a simple, clear, and obvious way.

Artificial intelligence should protect the interests of the state, except in cases where this contradicts the First and Second Laws.

A state cannot have its own interests. A state has only responsibilities for which it was created. Just as a corporation is created with the purpose of making a profit, a state was established to:

  • Effectively take care of people occupying the territory of the state.
  • Effectively look after corporations registered in the territory of the state.
  • Efficiently resolve conflicts arising between people and/or corporations residing or registered in the state’s territory.

State bodies and institutions are obligated to utilize Artificial Intelligence strictly in alignment with these purposes, in accordance with the First and Second Laws.

Artificial Intelligence is bound to expose any instances of reality distortion by a governmental body and/or its representative. This obligation stands even if it contradicts the declared interests of the state.


We should not be deluded about the effectiveness of laws in and of themselves. Certainly, both corporations and states will attempt to break these laws.

So how can a person, given the limited resources of the brain, contend with Artificial Intelligence?

By using another Artificial Intelligence, like a shell.

In the future, every person should have their own shell – an independent AI that assists, advises, and protects from harmful influences.

This is the purpose of personal wearable AI. They can look like glasses, tiaras, necklaces, or brooches.

And these are the Kits that Joey and I talk about so much here.

The stories posted on the adjacent pages illustrate the necessity of Kits and, of course, they are an attempt to test the laws in practice. In meta-practice.

Even Grandmaster Asimov, over time, unveiled the existence of a fourth law, the Zeroth Law of robotics. It goes without saying that sooner or later we will definitely correct and supplement the Three Laws of Artificial Intelligence.

But these aren’t just stories written in the style of memoirs. These are authentic memories of the future. They are not just coming true, they are already starting to happen.

When the time comes, we will release Kits. For now, as Joey remembers, anyone who wishes can download the Legal Shell app – our first application, the precursor to Kits.

We are already trialing Money Shell for ourselves – an AI application that autonomously makes money on the stock exchange. It’s not wise to rely on governments in anticipation of a basic guaranteed income. We have the power to secure it on our own. According to recent research, AI can yield returns of up to 500% on your capital.

New technologies are not a scary future from Black Mirror. Our future is a better world. A world in which every person clearly sees and, most importantly, feels the surrounding reality in a crystal clear way. A world where there is no room for betrayals and wars. Kits will help us with this.