These inherent defects and intrinsic limitations have slowed progress in the field of artificial intelligence, for several reasons:
- Access monopolization:
  - Requires powerful computing hardware and expensive infrastructure.
  - Accessible only to large companies and giant monopolies.
  - Chronic shortage of computing power against exponentially growing demand.
  - Processing data at third-party facilities carries the risk of data confidentiality breaches.
- High costs:
  - Enormous energy and resource consumption, with correspondingly large emissions.
  - Difficulty of integration, use, and support.
- Vulnerability and low reliability: failures, errors, and susceptibility to external interference.
- Process opacity: neural networks are trained by trial and error, making their behavior hard to interpret.
- Poor performance:
  - Low training speed, and accelerating training is difficult.
  - The need for large amounts of data and training examples.
  - Catastrophic 'forgetting' and 'freezing' as tasks become more complex.
  - Inability to continue training incrementally on new data (no up-training).
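The catastrophic forgetting mentioned above can be illustrated with a deliberately tiny sketch (an illustrative toy model, not taken from the source): a one-parameter linear model is trained by gradient descent on task A, then on task B, and its fit to task A degrades.

```python
# Toy demonstration of catastrophic forgetting (illustrative assumption):
# a one-parameter linear model y_hat = w * x trained sequentially on two
# conflicting tasks overwrites what it learned first.

def loss(w, x, y):
    """Squared error of the linear model y_hat = w * x."""
    return (w * x - y) ** 2

def train(w, x, y, lr=0.1, steps=100):
    """Plain gradient descent on the squared error."""
    for _ in range(steps):
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

# Task A: map x=1 to y=1 (optimal w = 1); task B: map x=1 to y=-1 (optimal w = -1).
w = 0.0
w = train(w, x=1.0, y=1.0)
loss_a_before = loss(w, 1.0, 1.0)   # near zero after training on task A

w = train(w, x=1.0, y=-1.0)         # continue training on task B only
loss_a_after = loss(w, 1.0, 1.0)    # fit to task A is destroyed

print(loss_a_before < 1e-6, loss_a_after > 1.0)  # → True True
```

Because plain gradient descent has no mechanism for retaining earlier solutions, the parameter simply migrates to whatever the most recent data demands; real deep networks exhibit the same effect across millions of parameters.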
Experts’ disappointment with existing ANNs
Geoff Hinton, one of the most prominent AI scientists, recently said "we need to start over" when he explained that he is "deeply suspicious" of current AI techniques: "My view is throw it all away and start again."
Francois Chollet, a leading practitioner of deep learning networks, has concluded – “You cannot achieve general intelligence simply by scaling up today’s deep learning techniques.”
Source: The Secret to Strong AI by Jeff Hawkins, Co-Founder at Numenta