MODULE 1.0
Systems you can Trust
 

There is a lot of discussion (and speculation) about the safety, or lack thereof, of Artificial Intelligence. Because Deep Learning is the prevailing AI technology at the moment, these concerns are tied to that technology. One of the problems with Deep Learning systems is that they offer no real possibility for introspection: it is very hard to determine why or how a Deep Learning system 'decides' something.

The ASTRID technology is radically different from Deep Learning in this respect: it uses Natural Language both to learn about the world and to build its internal world model. That means we can actually inspect the internal world model and even 'repair' things if needed. Within the ASTRID project we developed the tools for "full brain introspection" and for further analysis of the conceptual constructs it contains.
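To make the idea of an inspectable, repairable world model concrete, here is a minimal sketch of a concept graph that can be read and corrected by a human. It is purely illustrative: the class, method and relation names are invented for this example and say nothing about ASTRID's actual internals.

```python
# Illustrative sketch only: a human-readable world model as a concept graph.
# All names (WorldModel, learn, inspect, repair) are hypothetical and do not
# reflect ASTRID's actual implementation.

class WorldModel:
    def __init__(self):
        # concept -> relation -> set of related concepts
        self.facts = {}

    def learn(self, subject, relation, obj):
        """Add a fact expressed in plain terms, e.g. ('fire', 'causes', 'heat')."""
        self.facts.setdefault(subject, {}).setdefault(relation, set()).add(obj)

    def inspect(self, subject):
        """Full introspection: everything the model believes about a concept."""
        return self.facts.get(subject, {})

    def repair(self, subject, relation, obj):
        """Remove a belief that turned out to be wrong."""
        self.facts.get(subject, {}).get(relation, set()).discard(obj)


model = WorldModel()
model.learn("fire", "causes", "heat")
model.learn("fire", "is_a", "chemical reaction")
print(model.inspect("fire"))   # every stored belief is readable as-is
model.repair("fire", "is_a", "chemical reaction")
```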

 

In addition to the 'readable' brain, the ASTRID system has the Emotional Bias Response Model (EBRM), which governs all of the system's actions, regardless of whether they are intentional or driven by insights the machine has uncovered. This is essentially a form of Fundamental Intention Modelling that ensures the machine always questions the validity, morality and impact of its own actions (and those of others).

Instead of humans having to make sure that the machine behaves safely, the machine itself constantly determines whether it is doing so, and takes corrective action if needed.
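The general pattern of gating every proposed action through such self-checks can be sketched as follows. The EBRM itself is not public, so the check functions, fields and thresholds below are assumptions made up purely to illustrate the idea of an action gate.

```python
# Illustrative sketch of gating every proposed action through self-checks
# before execution. The checks and names are invented for this example and
# are not a description of the actual EBRM.

def violates_constraints(action):
    """Hypothetical check: does the action break a known rule or norm?"""
    return action.get("harm", 0) > 0.5

def impact_too_high(action):
    """Hypothetical check: is the expected impact beyond what is acceptable?"""
    return action.get("impact", 0) > 0.8

def execute(action):
    print(f"executing: {action['name']}")

def corrective_action(action):
    print(f"blocked and revised: {action['name']}")

def act(action):
    # The system questions its own action before carrying it out,
    # instead of relying on an external human check.
    if violates_constraints(action) or impact_too_high(action):
        corrective_action(action)
    else:
        execute(action)

act({"name": "answer a question", "harm": 0.0, "impact": 0.1})
act({"name": "shut down a service", "harm": 0.2, "impact": 0.9})
```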

MODULE 1.1
Real Understanding
 

Computers can't really understand anything... until now.

When we talk about Artificial Intelligence we tend to use terms like 'understanding' and 'intention', while in reality the machine has neither. Computers run programs that consist of rules: the rules are checked, and when a rule fires, the associated action is started and something gets done. The machine doesn't have to understand anything; it is just executing commands.
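In code, that traditional rule-and-action pattern looks roughly like this. It is a generic example of a rule engine, not tied to any particular product:

```python
# A classic rule engine: each rule is a condition plus an action. When a
# condition matches the current state, the action fires. No understanding
# is involved; the program only executes what it was told.

state = {"temperature": 82}

rules = [
    (lambda s: s["temperature"] > 80, lambda s: print("turn on the fan")),
    (lambda s: s["temperature"] < 15, lambda s: print("turn on the heater")),
]

for condition, action in rules:
    if condition(state):
        action(state)   # the rule 'fires' and something gets done
```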

The ASTRID system is also a computer program and, like any other computer program, consists of rules and actions. But there is an intrinsic difference between ASTRID and traditional software: the ASTRID system works by building conceptual understanding.

 

Where traditional software has rules that fire on specific states of the world, and actions that act directly on those or other states, the ASTRID system only has rules and actions that help it build a model of the world; the system itself has to figure out which rules and actions to apply in response to that model.

As with humans, when the internal model differs from reality, this either results in a change to the internal model (something is learned), or an action is undertaken to bring reality in sync with the internal model (a goal is pursued). This is obviously only possible in a machine when you have an ASTRID-like system that is capable of learning directly from new input.
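A minimal sketch of that learn-or-act loop is shown below. It is purely illustrative; the function names and the toy state are hypothetical placeholders, not ASTRID code.

```python
# Illustrative sketch: when the internal model and observed reality disagree,
# either the model is updated (learning) or an action is taken to push
# reality toward the model (pursuing a goal).

def reconcile(model, observation, is_goal):
    for key, observed_value in observation.items():
        expected = model.get(key)
        if expected == observed_value:
            continue                       # model and reality agree
        if is_goal(key):
            print(f"act: change '{key}' from {observed_value} to {expected}")
        else:
            model[key] = observed_value    # learn: adapt the model to reality
            print(f"learned: '{key}' is now {observed_value}")
    return model

model = {"door": "closed", "weather": "sunny"}
observation = {"door": "open", "weather": "raining"}
reconcile(model, observation, is_goal=lambda key: key == "door")
```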

MODULE 1.2
Full Autonomy Explained

 

A COMPLEX WORLD VIEW

LEARN FROM EXPERIENCE

HANDLE ANALOGIES

A fully autonomous system needs to understand the world at fundamental levels of reality.

We know that humans have an internal world model that is constantly updated by new experiences and insights; this is known as Commonsense Knowledge. Humans also use this internal representation of the world to steer their actions in the real world. Having an internal representation of the world that is updated dynamically is therefore a fundamental requirement for full autonomy.

Building an internal world model can only be achieved through continuous incremental learning, as it is impossible to build such a representation manually. And even if that were possible, the resulting model would be very rigid and lack any dynamic properties.

Machine Learning based on Neural Networks (known as Deep Learning) won't work either, as retraining such a system for every small change in the environment would simply take too long. Besides that, procuring big datasets for every small change is practically impossible.

As with humans, a system with an internal world model can encounter situations that are completely new. When its prior Commonsense Knowledge contains nothing that applies directly, the only way to handle such situations instantly is to use analogous information to make useful inferences in the face of uncertainty.

The capacity to handle analogies scales in proportion to the amount of trained/learned Commonsense Knowledge about the world.
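One way to picture analogy-based inference is as similarity over already-known concepts: the more the system knows, the more likely a useful analogue exists for a novel situation. The sketch below is an invented illustration using simple attribute overlap as a stand-in for real analogy making; the knowledge base and attributes are made up and do not describe ASTRID's mechanism.

```python
# Illustrative only: handle a novel concept by finding the most similar
# known concept and transferring knowledge from it.

known = {
    "horse":   {"legs", "animal", "can_be_ridden"},
    "bicycle": {"wheels", "vehicle", "can_be_ridden"},
    "car":     {"wheels", "vehicle", "engine"},
}

def closest_analogy(new_attributes):
    # Jaccard overlap between the new situation and every known concept.
    def overlap(attrs):
        return len(attrs & new_attributes) / len(attrs | new_attributes)
    return max(known, key=lambda name: overlap(known[name]))

# A zebra has never been seen before, but it resembles a horse,
# so horse-related knowledge can be transferred to it.
zebra = {"legs", "animal", "stripes"}
print(closest_analogy(zebra))   # -> 'horse'
```

The more concepts the knowledge base contains, the better the chance that a close analogue is found, which mirrors the scaling claim above.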

© 2021-2023 MIND|CONSTRUCT