Forget the "Deep Learning" that the majority of the AI community has been developing up until now.
The path to true artificial intelligence lies in Geometric Learning.
For decades, both academia and Big Tech have been striving to mechanize the enigmatic processes we humans use to solve the problems encountered in everyday life.
How do people cope with complex problem solving? We rarely solve problems by reasoning everything out from first principles. We create heuristics.
With a nod toward the Expert Systems of the early AI era, initial academic approaches tried to separate the knowledge used in problem solving from the mechanisms that compute the problem “solution”.
But if there was one lesson the entire AI community took away from those early-era expert systems, it was the perverse difficulty of translating human symbolic knowledge into a form interpretable by a machine inference engine.
Knowledge is not the product of problem solving; it is just the structuring scaffold for it.
Which has brought us to the emergence of connectionist approaches to AI, and the popularity of the artificial neural network paradigm.
The learning of artificial neural networks has a “black box” nature, which is certainly a large part of their academic allure, but this opacity of “deep learning” is also a great weakness.
The ANN architecture irreversibly internalizes its “knowledge”, diffusing it across the learned network in a representation that cannot be readily translated to any mechanism outside of it.
Which means that artificial neural networks are difficult to apply to complex, multi-step tasks.
Real-world problem solving is typically a composite affair, one that may require recursion, branching decisions and recurrence, demanding an ability to form structured representations, a behavior the ANN architecture cannot as yet demonstrate.
But putting their theoretical aspects aside, the ANN architecture suffers from a significant practical limitation as well, one manifest in the relatively high energy demands of neural network computing. Curiously, the primary bottleneck to mitigating that energy burden is memory access, not any optimization of the core multiply-and-accumulate (MAC) computation of the design, and this holds even in the so-called third-generation (spiking) neural network constructions.
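To put rough numbers on that claim, here is a back-of-the-envelope sketch in Python. The energy figures are not from the Organon Sutra; they are assumptions drawn from commonly cited 45 nm process estimates (Horowitz, ISSCC 2014), and exact values vary by technology node, but the ratio is the point: fetching an operand from off-chip memory costs roughly two orders of magnitude more than computing with it.

    # Rough energy comparison: fetching a weight from DRAM versus
    # computing a multiply-accumulate (MAC) with it.
    # Figures below are assumed 45 nm estimates (Horowitz, ISSCC 2014).
    FP32_MULT_PJ = 3.7     # 32-bit floating-point multiply, picojoules
    FP32_ADD_PJ  = 0.9     # 32-bit floating-point add
    DRAM_READ_PJ = 640.0   # reading one 32-bit word from off-chip DRAM

    mac_pj = FP32_MULT_PJ + FP32_ADD_PJ   # one multiply-accumulate
    ratio = DRAM_READ_PJ / mac_pj

    print(f"One MAC:        {mac_pj:.1f} pJ")
    print(f"One DRAM fetch: {DRAM_READ_PJ:.1f} pJ")
    print(f"The fetch costs ~{ratio:.0f}x the arithmetic")
    # The fetch costs ~139x the arithmetic, which is why memory
    # access, not the MAC itself, dominates the energy budget.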
There is currently an industrial movement to implement the neural network architecture directly in silicon. Yet even when these foundry solutions exploit the high parallelism within the design, they still face the headwind of physics when it comes to moving a data bit from dynamic random access memory to the logic circuit performing the computation. It is a fundamental constraint that Nature overcame several hundred million years ago, and one that the academic and Big Tech AI community is only now beginning to address.
Conceptually, this constraint traces back to a disparity in the functional design of the basic computing elements of the artificial neural network itself.
Although hyped as analogs of biological neuron cells, artificial neural network “neurons” differ from their biological counterparts in one very critical respect:
Artificial neural network elements are state devices.
Biological neuron cells are signaling devices.
Artificial neural network elements hold states which represent an already abstracted “weight”, an abstraction that carries the concrete baggage of power dissipation and the logistical support of external memory.
The signaling that a biological neuron demonstrates carries no predetermined abstraction; it is a behavior that demands no external memory and none of its attendant ballast of power and logistical requirements.
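To make the state-versus-signal distinction concrete, the sketch below contrasts the two element types in Python. The names ann_neuron and LIFNeuron are illustrative, not drawn from the Organon Sutra or any particular library; the spiking element is a standard leaky integrate-and-fire model, shown only to illustrate event-driven signaling as opposed to a stored-weight state machine.

    import numpy as np

    def ann_neuron(x, w, b):
        """State device: every evaluation reads a stored weight vector
        from memory and performs a full multiply-accumulate pass."""
        return max(0.0, float(np.dot(w, x)) + b)   # ReLU activation

    class LIFNeuron:
        """Signaling device (leaky integrate-and-fire): activity is an
        event in time; the element holds no learned weight of its own."""
        def __init__(self, tau=20.0, v_thresh=1.0, v_reset=0.0):
            self.tau = tau             # membrane time constant (ms)
            self.v_thresh = v_thresh   # firing threshold
            self.v_reset = v_reset     # potential after a spike
            self.v = v_reset           # current membrane potential

        def step(self, input_current, dt=1.0):
            # Leak toward rest, then integrate whatever input arrived.
            self.v += dt * (-self.v / self.tau) + input_current
            if self.v >= self.v_thresh:
                self.v = self.v_reset
                return 1               # emit a spike: a signal, not a state lookup
            return 0

The first element is inert without its externally stored w; the second communicates purely through the timing of its output events, echoing the biological signaling described above.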
It is because of this disparity that the AI community will forever be on the wrong side of the power dissipation equation as it scales up its ANN models to deal with real-world problems, launching concepts from the whiteboard into operational designs that implement state devices as the elementary computing units of their networks.
There is, however, one AI design that has implemented Nature’s solution to this engineering impediment.
An AI engineering plan based on the true functionality of biological neurons, a plan detailed in the First Frontier Project’s exposition of the Organon Sutra.
See the unique creation of the Organon Sutra, and the OTHER AI you have not been hearing about.
True artificial intelligence will be the greatest tool mankind can deploy to leverage TIME.
Time is the most precious commodity in existence, and everyone on the planet is given only a limited amount of it. According to the Fifth Rule of the Cryptocosm, Time is the final measure of cost because Time is what remains scarce when all else becomes abundant.
The scarcity of Time trumps an abundance of money in a linear (transactional) economy.
Creating nonlinearities in the leveraging of time will be the ultimate common denominator of economic power for any individual or organization deploying true artificial intelligence.
True AI machines reasoning with the intellectual dexterity of humans, combined with the supreme leveraging of Time.
First Frontiers.com