Frequently Asked Questions
Below you will find questions that have been answered over the course of a decade of research and development on the ASTRID system by Hans Peter Willems. As you can imagine, most of the obvious questions have already been asked (many times over), and you will find the answers to those here, along with answers to some less obvious questions.

 

This section of the website is actively maintained and new FAQ items are added over time. These might be new questions we encounter in our daily operations, but also questions from the past decade that we simply haven't gotten around to including here yet.

Please let us know if you have a question that could, or even should, be in here. We will gladly answer it and add it to this page.


Can I buy an ASTRID system

The ASTRID system, on its own, is not a standalone product. ASTRID is embeddable technology, meant to be integrated into machines and applications that need a high level of autonomous reasoning. Therefore, it is not possible to buy an "ASTRID system", but it is possible to license the ASTRID technology for implementation into end-user products.

In the coming years, MIND|CONSTRUCT will not only work with outside developers and system integrators, but will also develop several specific end-user applications in-house. Those developments will focus mainly on systems aimed at solving very complex problems, which need the specific high-load training facilities that we developed earlier for the initial training of the ASTRID base system.

Is it safe to have Autonomous Robots in society

We don't have fully autonomous machines yet, but with technology like the ASTRID system it is conceivable that this will happen sooner than most people anticipate. Eventually, we might want to use such machines in public spaces, with fully autonomous vehicles being the obvious first application. The question about the safety of such machines is a legitimate one.

One of the underlying ideas (and the science) behind the development of the ASTRID system is the notion that humans understand other humans because of their shared Commonsense Knowledge. This means that a machine that shares that same Commonsense Knowledge should be able to understand how to make things safe for humans.

Asimov's Robot stories explored the problems of having autonomous robots in society. In those stories, humans don't trust sentient robots, and therefore the robots are not allowed in public spaces. However, in those stories there is no evidence of any ability to test or even introspect autonomous AI brains. The current discussion about the ability to explain AI-based decisions points to a future where we actually can trust such systems.

Everyone talks about the dangers of Super Intelligence, so how do you make it safe

Discussions on the dangers of Super Intelligence have, until now, lacked any reasonable grounding. This is because those discussions lack any definition of such a Super Intelligent system, and therefore any behavior attributed to such a system has no scientific foundation of any kind.

Because we at MIND|CONSTRUCT actually know the definition of a Super Intelligent system, we also know that the dangers many point at are non-existent in a real AGI system like ASTRID. We do not need to implement convoluted solutions for safety, because the ASTRID system is inherently safe by its fundamental design.