
Systemic shortcomings of classical AI

Inherent defect of neural networks

The development paradigm of existing neural networks was flawed from the outset. This innate error is the main reason that progress in artificial intelligence has slowed and become extensive rather than efficient. We call this the Rosenblatt Fallacy – the erroneous postulate that a formal neuron should be limited to a single weight per synapse, a design that reflected biologists’ long-outdated notions of how real neurons work.
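The single-weight-per-synapse design described above can be made concrete with a minimal sketch of a classic Rosenblatt-style formal neuron. This is an illustrative toy, not code from any system discussed here; the function name and values are invented for the example.

```python
# Classic formal (Rosenblatt-style) neuron: each synapse carries
# exactly one scalar weight -- the assumption the text calls the
# Rosenblatt Fallacy. Toy illustration only.

def formal_neuron(inputs, weights, bias):
    # One multiplication per synapse: a single weight per input.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Hard threshold activation: fire (1) or stay silent (0).
    return 1 if z >= 0 else 0

# Example: a neuron computing logical AND of two binary inputs.
print(formal_neuron([1, 1], [0.5, 0.5], -0.7))  # fires only when both inputs are 1
```

However complex the network, every connection in this scheme is still reduced to one scalar weight; all richer behaviour has to come from stacking many such units.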

Intrinsic limitations of existing ANNs

This error in the original structure of neural networks gives rise to inevitable limitations that are becoming ever more tangible and ever more critical. As data volumes and task complexity grow, the processing power required by networks built on ordinary neurons grows exponentially. Training time, energy consumption, the cost of human resources and other expenses grow with it. The cost of training neural networks is becoming prohibitive.

In summary, classical ANNs suffer from:
  • Long training times (hours to years), depending on the size of the ANN and the number of training records
  • Large computing capacity required for complex tasks
  • Complicated training algorithms and weight-adjustment procedures
  • Poor scalability
  • Adding a new pattern to a trained ANN requires retraining the entire network
  • Sporadic network paralysis during training


This inherent defect and these intrinsic limitations have slowed progress across the field of artificial intelligence in several ways:

  1. Access monopolization:
    1. Requires powerful computing hardware and expensive infrastructure.
    2. Accessible only to large companies and giant monopolies.
    3. Chronic shortage of computing power with exponentially growing needs.
    4. Processing data at third-party facilities carries the risk of data confidentiality breaches.
  2. High costs:
    1. Enormous energy and resource intensity, with correspondingly large emissions.
    2. Difficulty in integration, use and support.
    3. Vulnerability and low reliability: failures, errors, external influences.
    4. Process opacity: neural network training occurs through trial and error.
  3. Poor performance:
    1. Low training speed, and training is very difficult to accelerate.
    2. The need for large amounts of data and training examples.
    3. The effect of catastrophic ‘forgetting’ and ‘freezing’ as tasks become more complex.
    4. Impossibility of incremental (up-)training.
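The ‘forgetting’ effect in point 3.3 above can be demonstrated with a deliberately tiny sketch: a classic one-weight-per-synapse perceptron trained sequentially on two tasks, with no rehearsal of the first. This is a hypothetical toy example (names and data invented for illustration), not a benchmark of any real system.

```python
# Toy demonstration of catastrophic forgetting with a classic
# perceptron learning rule (single scalar weight and bias).

def step(z):
    return 1 if z >= 0 else 0

def train(w, b, data, epochs=20, lr=0.1):
    # Standard perceptron update: nudge weight and bias on each error.
    for _ in range(epochs):
        for x, y in data:
            err = y - step(w * x + b)
            w += lr * err * x
            b += lr * err
    return w, b

def accuracy(w, b, data):
    return sum(step(w * x + b) == y for x, y in data) / len(data)

# Task A: positive inputs belong to class 1.
task_a = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
# Task B: the opposite labelling of the same inputs.
task_b = [(x, 1 - y) for x, y in task_a]

w, b = 0.0, 0.0
w, b = train(w, b, task_a)
acc_a_before = accuracy(w, b, task_a)  # perceptron masters task A

w, b = train(w, b, task_b)             # sequential training, no rehearsal of A
acc_a_after = accuracy(w, b, task_a)   # task A is now 'forgotten'

print(acc_a_before, acc_a_after)
```

Because the same weights must encode both tasks, fitting task B overwrites what was learned for task A; adding the new knowledge without retraining on the old is not possible in this architecture.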

Experts’ disappointment with existing ANNs

Geoff Hinton, one of the most prominent AI scientists, recently said we need to “start over”, explaining that he is “deeply suspicious” of current AI techniques: “My view is throw it all away and start again.”

Francois Chollet, a leading practitioner of deep learning networks, has concluded: “You cannot achieve general intelligence simply by scaling up today’s deep learning techniques.”
Source: The Secret to Strong AI by Jeff Hawkins, Co-Founder at Numenta