OpenCog
OpenCog is an open-source software project to build a human-level artificial general intelligence (AGI). The name "OpenCog" is derived from "open", meaning open source, and "cog", meaning cognition. OpenCog doesn't emulate the human brain in any detail. Instead it uses currently available computer hardware to run software that draws inspiration from neuroscience, cognitive psychology, and computer science. The assumption is that the human brain is only one particular way of achieving general intelligence and that other methods are just as viable. The project is led by Ben Goertzel. The roadmap targets reaching human-level AGI by the end of 2021.
Latest news
News to follow...
General introduction
Info to follow...
Cognitive architecture
Central to OpenCog is a neural/semantic memory system called the AtomSpace. This is a knowledge base that contains a large number of atoms and links between atoms. Each atom represents something to be remembered, including both physical objects and abstract concepts. For example, there might be a "table" atom, a "chair" atom, and a "lunchtime" atom. Links between these three atoms might represent the memory that sitting down at a table is often associated with lunchtime.
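The table/chair/lunchtime example can be sketched as a tiny in-memory graph. This is an illustrative sketch only, not OpenCog's actual API; all class and method names here are invented.

```python
class Atom:
    """A node in the knowledge base: one thing to remember."""
    def __init__(self, name):
        self.name = name

class Link:
    """A typed association connecting two or more atoms."""
    def __init__(self, link_type, atoms):
        self.link_type = link_type
        self.atoms = atoms

class AtomSpace:
    """A minimal stand-in for the AtomSpace knowledge base."""
    def __init__(self):
        self.atoms = {}   # name -> Atom
        self.links = []

    def add_atom(self, name):
        # Reuse the existing atom if one with this name is already known.
        return self.atoms.setdefault(name, Atom(name))

    def add_link(self, link_type, names):
        link = Link(link_type, [self.add_atom(n) for n in names])
        self.links.append(link)
        return link

# The memory that sitting down at a table is associated with lunchtime:
space = AtomSpace()
space.add_link("association", ["table", "chair", "lunchtime"])
```

Note that adding a link implicitly creates any atoms it mentions, so knowledge can be entered link-first.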
Working on this knowledge base are a number of algorithms, or cognitive processes, which are called MindAgents. These are software objects that act on the AtomSpace to modify, add, or remove atoms and links. In this respect OpenCog can be thought of as a blackboard architecture. The MindAgents are scheduled by a scheduler. The simplest scheduler just cycles through the MindAgents allocating processor time one after the next.
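The round-robin scheduling of MindAgents can be sketched as follows. The example agent and its 0.99 decay factor are invented for illustration; this is not OpenCog's real scheduler code.

```python
class MindAgent:
    """Base class for a cognitive process that acts on the AtomSpace."""
    def run(self, atomspace):
        raise NotImplementedError

class ForgettingAgent(MindAgent):
    """Toy example agent: weaken every link a little each cycle."""
    def run(self, atomspace):
        for link in atomspace["links"]:
            link["strength"] *= 0.99

class RoundRobinScheduler:
    """The simplest scheduler: give each agent processor time in turn."""
    def __init__(self, agents):
        self.agents = agents

    def cycle(self, atomspace):
        for agent in self.agents:
            agent.run(atomspace)

# One scheduler cycle over a minimal AtomSpace stand-in (a plain dict):
atomspace = {"links": [{"strength": 1.0}]}
scheduler = RoundRobinScheduler([ForgettingAgent()])
scheduler.cycle(atomspace)
```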
A core principle of OpenCog is that there is no single algorithm that is responsible for intelligence. Rather, there are a large number of different specialised algorithms and these work closely together in cognitive synergy. The MindAgents help each other out and the total becomes more than the sum of its parts. Higher level features of cognition are not explicitly built into the system but are expected to emerge. There is no "self" or "free will" atom, for example.
AtomSpace
The AtomSpace is a labelled, weighted, hypergraph containing many atoms and links. The atoms can be of many different types. Some correspond to the recognition of physical objects (e.g. chair, table, ball), others correspond to procedures (e.g. "sit down", "raise arm"). There are also concept atoms, as well as atoms corresponding to particular emotional states. Although all atoms are labelled, many don't have humanly-readable names. These are generally things that have been learnt and labelled with an automatically created ID number.
Each atom comes with an AttentionValue object which specifies how much memory and processor time the atom should get. The ShortTermImportance (STI) value defines how much processor time the atom should get in the near future. The LongTermImportance (LTI) value defines how important it is to retain the atom in memory. The VLTI bit specifies whether the atom should be saved to disk if it is removed from RAM.
The links between atoms also come in various types. For example, there are logical, fuzzy, and association links. Hebbian links measure how often two atoms have been associated. Links are assigned a TruthValue which specifies how strong the link is. This value can be a fixed number or a probability distribution with variable upper and lower bounds.
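The attention and truth machinery described above can be summarised in two small value objects. The field names follow the text; the concrete numbers are invented, and this is a sketch rather than OpenCog's actual classes.

```python
from dataclasses import dataclass

@dataclass
class AttentionValue:
    sti: float    # ShortTermImportance: processor time in the near future
    lti: float    # LongTermImportance: how important to keep in memory
    vlti: bool    # save to disk if the atom is removed from RAM?

@dataclass
class TruthValue:
    strength: float     # how strong the link is
    confidence: float   # how certain we are of that strength

# A strongly evidenced Hebbian association and a moderately important atom:
tv = TruthValue(strength=0.8, confidence=0.9)
av = AttentionValue(sti=2.5, lti=0.7, vlti=True)
```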
Knowledge representation
Explicit knowledge is represented by the atoms: one atom represents one piece of knowledge. Implicit knowledge is represented by a "map" of multiple linked atoms. Map encapsulation occurs when a new atom is created to represent an existing map of implicit knowledge. This happens frequently, so new atoms are created all the time. Many of them are nonsense, however, so newly created atoms are programmed to decay over time unless they are refreshed. This way the important atoms bubble up and become persistent while irrelevant atoms are forgotten.
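The decay-unless-refreshed dynamic can be sketched numerically. The decay factor and pruning threshold here are invented for illustration.

```python
DECAY = 0.95        # per-cycle importance decay (illustrative value)
PRUNE_BELOW = 0.05  # atoms weaker than this are forgotten

def decay_cycle(importances, refreshed):
    """Decay every atom's importance; refreshed atoms reset to full.
    `importances` maps atom name -> importance in [0, 1]."""
    survivors = {}
    for name, value in importances.items():
        value = 1.0 if name in refreshed else value * DECAY
        if value >= PRUNE_BELOW:
            survivors[name] = value   # this atom lives another cycle
    return survivors

atoms = {"useful-map": 1.0, "nonsense-map": 1.0}
for _ in range(80):
    atoms = decay_cycle(atoms, refreshed={"useful-map"})
# the refreshed atom persists; the un-refreshed one has been forgotten
```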
MindAgents
A MindAgent is an algorithm, or group of algorithms, that acts on the AtomSpace to execute a particular cognitive process. Some examples of MindAgents are:
- Object recognition
- Language comprehension
- Attention allocation
- Forgetting agent - the strength of links between atoms decays over time, possibly dependent on how much the links are used
- Importance updating - atom importance variables are updated using probabilistic inference
- Concept formation - creates speculative, potentially interesting new concept nodes
- Clustering - creates concept nodes representing clusters of existing concept nodes
- Credit assignment - given a goal, figures out which procedures' execution, and which atoms' importance, can be expected to lead to the goal's achievement
The activities of the MindAgents are scheduled by a scheduler, the simplest of which just cycles through them one after the next. In theory any collection of MindAgents could be selected and put to work on the AtomSpace. The art of getting the OpenCog architecture to fulfil its goals is to get the right MindAgents working together in the right way.
Memory types
Info to follow...
OpenPsi
OpenPsi is the component of OpenCog which governs the intelligent agent's motivations, its basic drives, its emotions, and its decisions on which actions to take. It is based on Psi-Theory, which was developed by the German psychologist Dietrich Dörner. Psi-Theory states that animal behaviour is driven by five basic needs, as follows:
- Existence preservation - food, water, body integrity (i.e. avoidance of pain)
- Species preservation - sexuality, reproduction
- Affiliation - the need to belong to a group, social interaction
- Certainty - the need to be able to predict events and their consequences
- Competence - the capability to master problems and tasks, including satisfying one's needs
Each of these drives continually increases over time, and a drive is reduced when satisfied by an action. The five drives constantly compete with each other for attention: the drive with the highest urgency is the one selected for action first, and once that drive is satisfied, another takes priority. For example, if an agent hasn't had a drink for a while, its drive for water will be high, so it will select actions that seek out water. Once its thirst has been quenched, the drive to reproduce or to seek out social interaction might take over. Once that is taken care of, the drive for water might take over again. And so the basic perception-cognition-action cycle continues.
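The competing-drives cycle described above can be sketched as a simple urgency loop. The five drives are from Psi-Theory, but the growth rates and the reset-to-zero satisfaction model are invented for illustration.

```python
# Per-cycle urgency growth for each drive (rates are invented):
GROWTH = {
    "existence preservation": 0.05,
    "species preservation":   0.02,
    "affiliation":            0.03,
    "certainty":              0.01,
    "competence":             0.01,
}

def step(urgencies):
    """One perception-cognition-action cycle: every drive's urgency
    grows, then the most urgent drive is acted on and thereby satisfied."""
    for drive, rate in GROWTH.items():
        urgencies[drive] += rate
    selected = max(urgencies, key=urgencies.get)
    urgencies[selected] = 0.0   # satisfying the drive resets its urgency
    return selected

urgencies = {drive: 0.0 for drive in GROWTH}
actions = [step(urgencies) for _ in range(5)]
# fast-growing drives get satisfied often, slow-growing ones occasionally
```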
The implementation of OpenPsi is based on Joscha Bach's work with MicroPsi. The diagram above-right outlines the MicroPsi architecture. The original MicroPsi was written in Java and the source code is publicly available on Google Code. Work is ongoing to integrate this into OpenCog.
DeSTIN
DeSTIN is an acronym for Deep SpatioTemporal Inference Network. It is a scalable deep learning architecture that relies on a combination of unsupervised learning and Bayesian inference. DeSTIN was invented by Itamar Arel, Derek Rose, and Robert Coop at the Machine Intelligence Lab at the University of Tennessee; a 2009 paper by the inventors is here (PDF).
DeSTIN is similar to Hierarchical Temporal Memory (HTM), developed by Jeff Hawkins and Dileep George at Numenta, Inc. Dileep George has since left Numenta to set up the similar research company Vicarious. In this mailing list discussion Ben Goertzel describes Google's deep learning methods as also being similar to DeSTIN.
For visual and auditory pre-processing OpenCog uses the DeSTIN deep network architecture. Deep networks can be thought of as similar to neural networks: layers of nodes (neurons) connected by links (dendrites and synapses). Input enters the first layer and is passed up through the layers, with the representation becoming more abstract and higher level as it goes. For example, the first layer can only recognise edges, the next layer recognises lines and simple shapes, and the third layer recognises objects. The mammalian cerebral cortex has six layers, although that doesn't necessarily mean six layers of neurons; there can be more than one neuron vertically stacked within a layer. In the original DeSTIN, four layers were used to classify single handwritten letters. DeSTIN is only used for the pre-processing; for higher-level cognition OpenCog uses other models.
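The layer-by-layer abstraction can be illustrated with a toy pooling hierarchy. This is a conceptual sketch of the idea of increasingly abstract layers, not the actual DeSTIN algorithm (which applies unsupervised learning and Bayesian inference at each node).

```python
def layer(features, pool=2):
    """Combine each group of `pool` adjacent features into one
    higher-level feature (here simply their average)."""
    return [sum(features[i:i + pool]) / pool
            for i in range(0, len(features), pool)]

# Raw input passes upward, becoming more abstract at each layer:
pixels = [0.1, 0.9, 0.8, 0.2, 0.4, 0.6, 0.0, 1.0]  # layer 0: raw input
edges = layer(pixels)    # layer 1: local patterns ("edges")
shapes = layer(edges)    # layer 2: combinations of patterns
objects = layer(shapes)  # layer 3: a whole-object representation
```

Each layer halves the number of features, so higher layers summarise progressively larger regions of the input.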
Probabilistic Logic Networks (PLN)
Info to follow...
MOSES
Info to follow...
RelEx
Info to follow...
Other components
Info to follow...
Software architecture
The OpenCog source code is freely available and anyone can download it. It can be compiled and run on Unix and OS X. There is currently no Windows version. There is also currently no demo or tutorial available to play with, although the team would like to create one at some point.
More info to follow...
Virtual world
Info to follow...
Robotics
Info to follow...
History / roadmap
OpenCog started in 2008 by extracting, cleaning, and open-sourcing part of the code of the Novamente Cognition Engine (NCE). The NCE was developed from 2001 to 2007 by Ben's AI consulting company, and was in turn inspired by the Webmind AI Engine, developed 1997-2001 by Webmind Inc., another company founded by Ben (the story of Webmind is told at http://www.goertzel.org/benzine/WakingUpFromTheEconomyOfDreams.htm). So Ben has essentially been working on this since 1997.

Initial development, 2008-2010, funded by a grant from the Singularity Institute, covered:
- Extracting, cleaning, and open-sourcing parts of the Novamente Cognition Engine: knowledge store, scheduler
- The PLN probabilistic reasoning system
- The MOSES automated program learning system (developed by Moshe Looks)
- The RelEx language comprehension system
- And more (what, exactly? Fishgram? ...)

The roadmap follows Piaget's theory of cognitive development (http://en.wikipedia.org/wiki/Piaget's_theory_of_cognitive_development), with five stages from infant to adult: infantile, concrete, formal, reflexive, and full self-modification.

2010-2012 (current; Piagetian infantile stage):
- Infant-level intelligence for video game virtual world characters
- Simple English-language dialogue: answering questions, taking instructions
- Integration with DeSTIN for object and event recognition in images and video
- Release of OpenCog v1.0
- Two-year funded project from the Hong Kong government (2011-2012): a Unity-based Minecraft-like world built, a proxy to Unity with 3D pathfinding, PLN body-control systems debugged and refactored, a PLN-based planner built, the OpenPsi motivation/emotion system implemented - basically lots of work on the virtual world

2013-2014 (Piagetian concrete stage):
- A complete, integrated proto-AGI mind at toddler level
- Collective cognition: multiple intelligent robots share knowledge and learn by copying each other
- Robot control and motion planning, tested in the BLISS robot lab (Xiamen, China) with the Nao platform
- Experiential language learning
- A system to let OpenCog instances synchronise with a central store on initialisation
- OpenCog v2.0

2015-2016 (Piagetian formal stage):
- Advanced learning and reasoning
- Humanoid robots outside the lab in rich environments
- Initial experimentation with lab equipment (e.g. gene sequencers)
- Initial experimentation with mathematical theorem proving
- All of this working towards artificial scientists, service robots, and other useful applications

2017-2018 (AGI experts):
- Creation of an artificial scientist, working in a lab on its own and designing experiments
- A service robot performing household tasks, driven by English-language communication
- A virtual assistant that accompanies its user in online spaces and augmented realities - sounds like the Google Assistant

2019-2021 (full human-level AGI):
- Can't emulate humans in full detail, but does the general things that humans do

2021-2023 (Piagetian reflexive stage; advanced self-improvement):
- Can significantly edit its own code base
- Training in further areas of science, industry, etc.
- Can inspect its own memory and source code and modify them (humans can't do this to themselves; computers can)
Practical applications
OpenCog is being used in a number of practical projects:
- Video games
- Information retrieval
- Biomedical informatics: analysing DNA data and the genetics of long-lived fruit flies (see "AIs, superflies, and the path to immortality")
- Hedge fund / financial predictions on the Hong Kong stock exchange

The idea is that animal/childlike AGI + useful narrow AI = useful early-stage AGI.
Funding
In 2008 OpenCog was supported with a grant from the Singularity Institute (now discontinued).
Two-year funded project from Hong Kong government (2011-2012).
Support from Novamente LLC, Ben Goertzel's own company.
In August 2011 Ben mentioned that he was working on a proposal to various funders for four-year funding covering 2012 to 2015.
Xiamen University BLISS lab - presumably funded by the Chinese government
- Hugo de Garis previously worked here
- http://bliss.xmu.edu.cn/
Videos
Two-hour presentation by Ben Goertzel, August 2011:
More videos:
- Latest - September 2012
- AGI overview / interview - September 2012
- Virtual world
- 30 min video filmed in Hong Kong
People involved
OpenCog developers can be broadly categorised into three groups:
- Around six people, including project leader Ben Goertzel, at the M-Lab at Hong Kong Polytechnic University
- Researchers at the BLISS lab at Xiamen University
- Others from the worldwide developer community
Weblinks
- OpenCog homepage
- Google Groups mailing list
- Wikipedia article
- Twitter feed
- Facebook page
- Google+ page
- Source code on Launchpad - current repository
- Source code on GitHub - planned future repository