Software Enables Experimentation Into Diverse AI Algorithms to Create an End-To-End Artificial General Intelligence System
Artificial Intelligence (AI) expert, serial entrepreneur, and noted software developer Charles Simon, CEO of FutureAI, has launched Brain Simulator II, a software platform for demonstrating how Artificial General Intelligence (AGI) – the next phase of AI – will emerge.
Brain Simulator II enables experimentation into diverse AI algorithms to create an end-to-end AGI system with modules for vision, hearing, robotic control, learning, internal modeling, and even planning, imagination, and forethought.
“New, unique algorithms that directly address cognition are the key to helping AI evolve into AGI,” Simon explains.
“Sallie, the Brain Simulator’s artificial entity, can move about, see, touch, and smell objects, and learn to understand speech within a simulated environment,” he explains. “She can even learn to navigate mazes using landmarks, the way a child might. This new research project advances new algorithms to simulate biological neuron circuits coupled with high-level artificial intelligence techniques.”
Pointing out that current examples of AI, such as Alexa and Siri, fall short at reasoning about time and space – things any three-year-old can do – Simon notes that Sallie furthers existing technologies. She uses binocular vision to estimate distances, learns words associated with her perceived environment, and learns to speak in baby talk like a toddler.
According to Simon, all of Sallie’s information is combined in a Universal Knowledge Store, which captures any kind of information with biologically plausible techniques. Relating information from multiple senses is key to comprehending that objects exist in a physical environment, an insight that underlies all human thought.
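To make the idea concrete, here is a minimal sketch of a Universal-Knowledge-Store-style graph in which percepts from several senses all link back to one object. The class and method names are illustrative assumptions for this article, not FutureAI's actual API.

```python
# Hypothetical sketch: a knowledge store as a graph of "Things" joined by
# labeled relationships, so vision, touch, and hearing converge on one node.

class Thing:
    """A node representing any concept: an object, a word, a sensory percept."""
    def __init__(self, label):
        self.label = label
        self.relationships = []  # outgoing (relation, target) links

    def add_relationship(self, relation, target):
        self.relationships.append((relation, target))

    def related(self, relation):
        """Return all Things linked from this one by the given relation."""
        return [t for r, t in self.relationships if r == relation]

# Percepts from different senses all attach to the same physical object.
ball = Thing("ball")
red = Thing("red")                # visual attribute
round_feel = Thing("round")       # tactile attribute
word_ball = Thing('word:"ball"')  # heard label

ball.add_relationship("looks", red)
ball.add_relationship("feels", round_feel)
ball.add_relationship("is-called", word_ball)

print([t.label for t in ball.related("looks")])  # ['red']
```

Because every relationship type is just a label, the same structure can hold visual, tactile, and linguistic links side by side, which is the multi-sense combination the paragraph above describes.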
“Brain Simulator II combines vision and touch into a single mental model and is making progress toward the comprehension of causality and the passage of time,” Simon notes. “As the modules are enhanced, progressively more intelligence will emerge.”
Brain Simulator II marries Neural Network and Symbolic AI techniques to create unbounded possibilities. It creates an array of millions of neurons interconnected by any number of synapses.
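The neuron-array idea above can be sketched as follows. This is an assumed, simplified model of a spiking-style engine cycle, not the actual Brain Simulator II implementation: each neuron carries any number of weighted synapses, and neurons at or above a firing threshold deliver charge to their targets once per cycle.

```python
# Minimal sketch (assumed, not the real engine) of an array of neurons
# interconnected by arbitrary numbers of weighted synapses.

class Neuron:
    def __init__(self):
        self.charge = 0.0
        self.synapses = []  # (target_index, weight) pairs

class NeuronArray:
    FIRE_THRESHOLD = 1.0

    def __init__(self, count):
        self.neurons = [Neuron() for _ in range(count)]

    def add_synapse(self, src, dst, weight):
        self.neurons[src].synapses.append((dst, weight))

    def step(self):
        """One engine cycle: neurons at or above threshold fire, deliver
        charge along their synapses, then reset. Returns who fired."""
        firing = [i for i, n in enumerate(self.neurons)
                  if n.charge >= self.FIRE_THRESHOLD]
        for i in firing:
            for dst, weight in self.neurons[i].synapses:
                self.neurons[dst].charge += weight
            self.neurons[i].charge = 0.0
        return firing

net = NeuronArray(3)
net.add_synapse(0, 1, 1.0)
net.add_synapse(1, 2, 1.0)
net.neurons[0].charge = 1.0
print(net.step())  # neuron 0 fires -> [0]
print(net.step())  # charge has propagated; neuron 1 fires -> [1]
```

Scaling this same structure to millions of neurons is an engineering problem (sparse storage, parallel cycles) rather than a conceptual change.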
Further, any cluster of neurons can be collected into a “Module” that can execute any desired background programming. For example, Brain Simulator II can integrate neural network recognition techniques with symbolic AI software structures, efficiently creating a relational knowledge store.
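A hedged sketch of that neural-to-symbolic bridge: a module owns a labeled cluster of neurons, and its background code runs each cycle, turning whichever recognizer neurons fired together into symbolic facts in a relational store. All names here are illustrative assumptions, not the product's real API.

```python
# Illustrative "Module": background code over a neuron cluster that records
# symbolic co-occurrence facts whenever recognizer neurons fire together.

class RecognizerModule:
    def __init__(self, labels):
        self.labels = labels  # one label per neuron in this module's cluster
        self.knowledge = {}   # symbolic relational store: label -> set of facts

    def fire(self, fired_indices):
        """Background code run once per engine cycle, given which of this
        cluster's neurons fired."""
        for i in fired_indices:
            label = self.labels[i]
            others = tuple(sorted(self.labels[j]
                                  for j in fired_indices if j != i))
            self.knowledge.setdefault(label, set()).add(("seen-with", others))

mod = RecognizerModule(["red", "round", "ball"])
mod.fire([0, 2])  # the "red" and "ball" recognizer neurons fired together
print(mod.knowledge["ball"])  # {('seen-with', ('red',))}
```

The neural side only reports firings; the symbolic side accumulates queryable relations, which is the division of labor the paragraph describes.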
Other features of Brain Simulator II include:
- The availability of millions of neurons via a desktop computer;
- The ability to create networks from scratch or from a library of functions;
- More than 20 module types for speech, vision, and robotic controls;
- Universal Knowledge Store modules, capable of storing any kind of information in neurons and synapses;
- The ability to write modules in any language supported by Microsoft’s .NET platform.
Brain Simulator II’s simple 2D and 3D simulators sidestep complexity by starting with just a few object types, attributes, and relationships. Once these work well, modules can be expanded to learn new objects, attributes, and relationships as they are encountered in the environment.