Image: Engin Akyurt 
MODULE 1.0
A radical new idea
 

It has been said many times over the last several years that we need a radical new idea to achieve Artificial General Intelligence. Yann LeCun (Chief AI Scientist at Meta, formerly Facebook) has stated in many interviews that not only do we lack the technology to build AGI; we don't even have the science yet.

Obviously, we need a radical new idea. Deep Learning, currently the leading Artificial Intelligence technology, lacks several key ingredients needed to build a system that can learn directly from experience. Deep Learning is also unable to reason based on causality or in the face of uncertainty. Solutions to these key issues within the Deep Learning domain aren't even on the horizon yet, and it might turn out to be impossible to implement these capabilities in a Deep Learning system at all.

 

Hans Peter Willems once stated, “Simplicity is infinitely more scalable than Complexity”, a statement that is more important than it seems at first. To build a system that can scale to incredible capabilities, its core functionality should be so elementary that it is actually simple. That is certainly a Radical New Idea in a world where Machine Learning/Deep Learning is one of the most complex and energy-hungry technologies on the planet.

Here at MIND|CONSTRUCT we not only developed the science, we also built the technology, because the best way to prove your science is to implement it in working technology. This technology can be introduced into every technological domain.

This technology is now available to prospective partners and clients, to be implemented in tailor-made applications for specific application domains.

Image: PixaBay 
MODULE 1.1
The symbolic advantage
 

Back in the seventies, when Deep Learning had not been invented yet (although Neural Networks had), Symbolic Artificial Intelligence was the leading technology in the field, and it held a lot of promise for the future of AI. It built on decades of scientific research and had already found its way into serious applications, known at the time mainly as Expert Systems. Some of those are still in use today.

Some of the most impressive feats those systems could already perform were Causal Reasoning and the handling of (symbolic) information to describe the world in detail. The only real hurdle they needed to clear to get to General Intelligence was to somehow get these systems to acquire their information in an automated way: not only because typing in all that information manually would take even a large group of people decades, but also because real Artificial General Intelligence requires a system that can update its information constantly.
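
As an illustration of the kind of reasoning those Expert Systems performed, below is a minimal sketch: explicit symbolic facts, if-then rules, and simple forward chaining. All facts, rules, and names in it are made-up placeholders, not code from any actual system.

```python
# A minimal sketch of the classic Expert System idea: symbolic facts plus
# if-then rules, with forward chaining to derive new conclusions.
# Every fact and rule here is an illustrative placeholder.

facts = {("rain", "falling"), ("ground", "exposed")}

# Each rule: if all conditions hold, assert the conclusion.
rules = [
    ({("rain", "falling"), ("ground", "exposed")}, ("ground", "wet")),
    ({("ground", "wet")}, ("ground", "slippery")),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# The chain rain -> wet ground -> slippery ground is an explicit,
# inspectable causal path.
```

The point of the sketch is that every derived conclusion can be traced back to explicit facts and rules, which is part of what made these systems so attractive for high-level reasoning.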

 

The problem of self-learning turned out to be much harder than initially thought. When Deep Learning eventually came along, made possible mainly by the advent of the Internet and Social Media delivering the needed Big Data, the field basically dropped Symbolic AI and declared making it self-learning to be impossible.

The Symbolic approach had (and still has) so many advantages over other systems (mainly Deep Learning) that it is strange that we dropped all that promise of high-level reasoning and Symbolic brain models (which have since proven to be eerily close to how our brain stores information) for a system that is only somewhat trainable but lacks everything else.

Although choosing Symbolic AI as the basis for development is almost publicly ridiculed, it seems more logical to solve just the self-learning problem (and get everything else for free) than to try to solve everything else in Deep Learning.

 
  Internal Papers & Reports
  • Self-learning Symbolic AI: A critical appraisal - Hans Peter Willems (2021) 

Image: Manuel Geissinger 
MODULE 1.2
The Deep Learning Disadvantage
 

Deep Learning is currently the prevalent technology in Artificial Intelligence R&D. From that fact alone, it would seem to be the way forward towards higher levels of intelligence in machines. However, Deep Learning and Artificial Neural Networks, the underlying scientific paradigm, are handicapped by several fundamental disadvantages.

The first and most visible issue is that Deep Learning needs large amounts of computing power for training. This is very costly, both in terms of the incredible amounts of server hardware needed and in terms of power consumption. Together, the power requirements and the space needed for all those machines make it impossible to implement such a system in any usable way in an autonomous mobile solution.

 

The second obvious problem is that Deep Learning needs large datasets (Big Data) for everything it needs to learn. As soon as a small fragment of the learned pattern changes, the system needs a new dataset and retraining. In addition, a Deep Learning system can only be trained for one purpose at a time: it is impossible to train one system to be good at both playing Chess and recognizing cats in pictures. This rigidity is one of the main obstacles to building AGI with Deep Learning.

Finally, Deep Learning currently lacks the capability to integrate knowledge into a complex world view. Instead of enriching the knowledge about a concept, as needed for Commonsense Knowledge Acquisition, Deep Learning does the opposite and reduces a concept to its most basic pattern. This is also the main reason Deep Learning systems are classified as brittle.
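
To make that contrast concrete, here is a small, purely illustrative sketch (not MIND|CONSTRUCT's actual representation; the concepts and relations are invented for the example): on the symbolic side, a concept keeps accumulating explicit relations, while a trained network ultimately compresses the same concept into a numeric pattern with no relations left to reason over.

```python
# Symbolic side: the concept "cup" is enriched with commonsense relations
# over time, building up a richer world view with every new fact.
# All concepts and relations here are invented for illustration.

knowledge = {}

def learn(concept, relation, target):
    """Enrich a concept by attaching another (relation, target) pair to it."""
    knowledge.setdefault(concept, set()).add((relation, target))

learn("cup", "is_a", "container")
learn("cup", "used_for", "drinking")
learn("cup", "made_of", "ceramic")
learn("cup", "found_in", "kitchen")

print(knowledge["cup"])  # every new experience adds explicit, reusable knowledge

# Deep Learning side (caricature): the same concept ends up as a numeric
# pattern, e.g. a learned vector like [0.12, -0.87, 0.33, ...], with no
# explicit relations left to reason over or to enrich.
```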

 
  External Articles
  • In defense of skepticism about deep learning 
  • Problems with Deep Learning 
  • We can’t trust AI systems built on deep learning alone 
  External Papers & Reports
  • Deep Learning: A Critical Appraisal - Gary Marcus 

© 2021-2023 MIND|CONSTRUCT