Last updated: Sep 28, 2012

OpenCog is an open-source software project to build a human-level artificial general intelligence (AGI). The name "OpenCog" is derived from "open", meaning open source, and "cog", meaning cognition. OpenCog doesn't emulate the human brain in any detail. Instead it uses currently available computer hardware to run software that draws inspiration from neuroscience, cognitive psychology, and computer science. The assumption is that the human brain is only one particular way of achieving general intelligence and that other methods are just as viable. The project is led by Ben Goertzel. The roadmap targets reaching human-level AGI by the end of 2021.


Latest news

News to follow...

General introduction

Info to follow...

Cognitive architecture

Central to OpenCog is a neural/semantic memory system called the AtomSpace. This is a knowledge base that contains a large number of atoms and links between atoms. Each atom represents something to be remembered, including both physical objects and abstract concepts. For example, there might be a "table" atom, a "chair" atom, and a "lunchtime" atom. Links between these three atoms might represent the memory that sitting down at a table is often associated with lunchtime.
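This structure can be sketched in miniature (an illustrative Python toy, not OpenCog's actual API, which is implemented in C++):

```python
# Toy sketch of an AtomSpace-style store: named atoms plus typed links
# between them. The class and type names here are illustrative only.

class Atom:
    def __init__(self, name):
        self.name = name

class Link:
    def __init__(self, link_type, atoms):
        self.link_type = link_type   # e.g. "AssociativeLink"
        self.atoms = atoms           # the atoms this link connects

class AtomSpace:
    def __init__(self):
        self.atoms = {}
        self.links = []

    def add_atom(self, name):
        return self.atoms.setdefault(name, Atom(name))

    def add_link(self, link_type, names):
        link = Link(link_type, [self.add_atom(n) for n in names])
        self.links.append(link)
        return link

space = AtomSpace()
# "sitting down at a table is often associated with lunchtime"
space.add_link("AssociativeLink", ["table", "chair", "lunchtime"])
print(sorted(space.atoms))  # ['chair', 'lunchtime', 'table']
```

Note that in the real AtomSpace a link is itself a kind of atom and can connect any number of atoms, which is what makes the structure a hypergraph rather than an ordinary graph.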

Working on this knowledge base are a number of algorithms, or cognitive processes, which are called MindAgents. These are software objects that act on the AtomSpace to modify, add, or remove atoms and links. In this respect OpenCog can be thought of as a blackboard architecture. The MindAgents are scheduled by a scheduler. The simplest scheduler just cycles through the MindAgents allocating processor time one after the next.

A core principle of OpenCog is that there is no single algorithm that is responsible for intelligence. Rather, there are a large number of different specialised algorithms and these work closely together in cognitive synergy. The MindAgents help each other out and the total becomes more than the sum of its parts. Higher level features of cognition are not explicitly built into the system but are expected to emerge. There is no "self" or "free will" atom, for example.


The AtomSpace is a labelled, weighted hypergraph containing many atoms and links. The atoms can be of many different types. Some correspond to the recognition of physical objects (e.g. chair, table, ball), others correspond to procedures (e.g. "sit down", "raise arm"). There are also concept atoms, as well as atoms corresponding to particular emotional states. Although all atoms are labelled, many don't have human-readable names; these are generally things that have been learnt and labelled with an automatically created ID number.

Each atom comes with an AttentionValue object which specifies how much memory and processor time the atom should get. The ShortTermImportance (STI) value defines how much processor time the atom should get in the near future. The LongTermImportance (LTI) value defines how important it is to retain the atom in memory. The VLTI bit specifies whether the atom should be saved to disk if it is removed from RAM.

The links between atoms also come in various types. For example, there are logical, fuzzy, and association links. Hebbian links measure how often two atoms have been associated. Links are assigned a TruthValue which specifies how strong the link is. This value can be a fixed number or a probability distribution with variable upper and lower bounds.
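As a rough sketch, the two kinds of values described above might look like this (the field names are paraphrases of the terms in the text, not OpenCog's exact classes, and the TruthValue shown is the simple fixed-number variant rather than a full probability distribution):

```python
from dataclasses import dataclass

@dataclass
class AttentionValue:
    sti: float = 0.0    # ShortTermImportance: processor time in the near future
    lti: float = 0.0    # LongTermImportance: how important to keep in memory
    vlti: bool = False  # if evicted from RAM, should the atom be saved to disk?

@dataclass
class TruthValue:
    strength: float     # how strong the link is
    confidence: float   # how much evidence backs the strength up

# A well-evidenced association that deserves near-term attention:
av = AttentionValue(sti=0.8, lti=0.5, vlti=True)
tv = TruthValue(strength=0.9, confidence=0.7)
```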

Knowledge representation

Explicit knowledge is represented by the atoms. One atom represents one piece of knowledge. Implicit knowledge is represented by a "map" of multiple linked atoms. Map encapsulation can occur when a new atom is created to represent an existing map of implicit knowledge. This happens often and lots of new atoms are created all the time. Many of them are nonsense, however, so newly created atoms are programmed to decay over time, unless they're refreshed. This way the important atoms bubble up and become persistent while irrelevant atoms are forgotten.
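The decay-unless-refreshed dynamic can be modelled in a few lines (a toy model; the decay factor, refresh boost, and forgetting threshold are invented for illustration):

```python
def decay_step(importances, decay=0.9, refreshed=(), boost=1.0, floor=0.05):
    """One cycle: refreshed atoms gain importance, the rest decay;
    atoms that fall below the floor are forgotten entirely."""
    survivors = {}
    for atom, sti in importances.items():
        sti = sti + boost if atom in refreshed else sti * decay
        if sti >= floor:
            survivors[atom] = sti
    return survivors

atoms = {"table": 0.5, "noise-42": 0.06}
for _ in range(5):
    atoms = decay_step(atoms, refreshed={"table"})
```

After a few cycles the frequently refreshed "table" atom has bubbled up, while the unused "noise-42" atom has decayed below the threshold and been forgotten.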


A MindAgent is an algorithm, or group of algorithms, that acts on the AtomSpace to execute a particular cognitive process. Some examples of MindAgents are:

  • Object recognition
  • Language comprehension
  • Attention allocation
  • Forgetting agent - the strength of links between atoms decays over time, possibly dependent on how much the links are used
  • Importance updating - atom importance variables are updated using probabilistic inference
  • Concept formation - creates speculative new concept nodes that are potentially interesting
  • Clustering - creates concept nodes representing clusters of existing concept nodes
  • Credit assignment - Given a goal, figure out which procedures' execution, and which atoms' importance, can be expected to lead to the goal's achievement

The activities of the MindAgents are scheduled by a scheduler, the simplest of which just cycles through them one after the next. In theory any collection of MindAgents could be selected and put to work on the AtomSpace. The art of getting the OpenCog architecture to fulfil its goals is to get the right MindAgents working together in the right way.
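The simple round-robin case amounts to something like this (an illustrative sketch, not the actual OpenCog scheduler code):

```python
class MindAgent:
    """An algorithm that gets a slice of time to act on the AtomSpace."""
    def __init__(self, name):
        self.name = name
        self.runs = 0

    def run(self, atomspace):
        self.runs += 1   # a real agent would add/modify/remove atoms here

def round_robin(agents, atomspace, cycles):
    # The simplest scheduler: give each agent a turn, one after the next.
    for _ in range(cycles):
        for agent in agents:
            agent.run(atomspace)

agents = [MindAgent("AttentionAllocation"),
          MindAgent("Forgetting"),
          MindAgent("ConceptFormation")]
round_robin(agents, atomspace={}, cycles=3)
# every agent has received the same number of turns
```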

Memory types

Info to follow...


OpenPsi is the component of OpenCog which governs the intelligent agent's motivations, its basic drives, its emotions, and its decisions on which actions to take. It is based on Psi-Theory, which was developed by the German psychologist Dietrich Dörner. Psi-Theory states that animal behaviour is driven by five basic needs, as follows:

  • Existence preservation - food, water, body integrity (i.e. avoidance of pain)
  • Species preservation - sexuality, reproduction
  • Affiliation - the need to belong to a group, social interaction
  • Certainty - the need to be able to predict events and their consequences
  • Competence - the capability to master problems and tasks, including satisfying one's needs

Each of these drives continually increases over time. A drive is reduced when satisfied by an action. The five drives are constantly competing with each other for attention. The drive with the highest urgency is the one which will be selected for action first. After that particular drive is satisfied, another drive will then take priority. For example, if an agent hasn't had a drink for a while, its drive for water will be high, thus its action selection will be to seek out water. Once its thirst has been quenched, the drive to reproduce or seek out social interaction might take over. Once this is taken care of, the drive for water might take over again. And so the basic perception-cognition-action cycle continues.
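This cycle can be caricatured in a few lines (the growth rates are invented, and real OpenPsi action selection is considerably more sophisticated than greedily picking the single most urgent drive):

```python
# Toy model of Psi-style drive competition: every drive's urgency rises
# each tick; acting on the most urgent drive satisfies (resets) it.
RATES = {"existence": 0.30, "species": 0.10, "affiliation": 0.15,
         "certainty": 0.20, "competence": 0.25}

def tick(urgency):
    for drive, rate in RATES.items():
        urgency[drive] += rate          # every drive increases over time

def act(urgency):
    chosen = max(urgency, key=urgency.get)  # highest urgency wins
    urgency[chosen] = 0.0                   # the action satisfies the drive
    return chosen

urgency = {d: 0.0 for d in RATES}
history = []
for _ in range(6):
    tick(urgency)
    history.append(act(urgency))
# the fastest-growing drive is acted on first, but once satisfied the
# other drives get their turn, and the cycle continues
```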

The implementation of OpenPsi is based on Joscha Bach's work with MicroPsi. The diagram above-right outlines the MicroPsi architecture. The original MicroPsi was written in Java and the source code is publicly available on Google Code. Work is ongoing to integrate this into OpenCog.


DeSTIN is an acronym for Deep SpatioTemporal Inference Network and is a scalable deep learning architecture that relies on a combination of unsupervised learning and Bayesian inference. A 2009 paper by the inventors of this method is here (PDF). DeSTIN was invented by Itamar Arel, Derek Rose, and Robert Coop at the Machine Intelligence Lab at the University of Tennessee.

DeSTIN is similar to Hierarchical Temporal Memory (HTM), developed by Jeff Hawkins and Dileep George at Numenta, Inc. Dileep George has since left Numenta to set up the similar research company Vicarious. In this mailing list discussion Ben Goertzel describes Google's deep learning methods as also being similar to DeSTIN.

For visual and auditory pre-processing OpenCog uses the DeSTIN deep network architecture. Deep networks can be thought of as similar to neural networks: layers of nodes (neurons) connected by links (dendrites and synapses). Input enters the first layer and is passed up through the layers, with the representation becoming more abstract and higher-level as it goes. That is to say, the first layer can only recognise edges, the next layer recognises lines and simple shapes, and the third layer recognises objects. The mammalian cerebral cortex has six layers, although that doesn't necessarily mean six layers of neurons; there can be more than one neuron stacked vertically within a layer. In the original DeSTIN, four layers were used to classify single handwritten letters. DeSTIN is used only for preprocessing; higher-level cognition uses other models.
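The pass-up-through-the-layers idea can be illustrated with a toy pipeline in which each layer summarises 2x2 patches of the layer below, so representations shrink and become more abstract going up (real DeSTIN nodes perform online clustering and belief propagation, not the simple pooling shown here):

```python
# Toy "deep network" pipeline: each layer pools non-overlapping 2x2
# patches of its input, shrinking the representation at every step.
def pool_layer(grid):
    """Summarise each non-overlapping 2x2 patch by its maximum value."""
    h, w = len(grid), len(grid[0])
    return [[max(grid[y][x], grid[y][x + 1],
                 grid[y + 1][x], grid[y + 1][x + 1])
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

image = [[0, 1, 0, 0],
         [1, 0, 0, 1],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]

layer1 = pool_layer(image)   # 2x2 grid: local, edge-like evidence
layer2 = pool_layer(layer1)  # 1x1 grid: one abstract summary of the input
```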

Probabilistic Logic Networks (PLN)

Info to follow...


Info to follow...


Info to follow...

Other components

Info to follow...

Software architecture

The OpenCog source code is freely available and anyone can download it. It can be compiled and run on Unix and OS X. There is currently no Windows version. There is also currently no demo or tutorial available to play with, although the team would like to create one at some point.

More info to follow...

Virtual world

Info to follow...


Info to follow...

History / roadmap

OpenCog started in 2008 by extracting, cleaning up, and open-sourcing a portion of the code from the Novamente Cognition Engine (NCE). The NCE was developed from 2001 to 2007 at Novamente LLC, Ben Goertzel's AI consulting company, and was in turn inspired by the Webmind AI Engine, developed 1997-2001 by Webmind Inc., another company founded by Ben. So, basically, Ben has been working on this line of code since 1997.

Initial development: 2008-2010, with a grant from the Singularity Institute
 - extracting, cleaning, and open-sourcing parts of the Novamente Cognition Engine
 - knowledge store and scheduler
 - PLN probabilistic reasoning system
 - MOSES automated program learning system (developed by Moshe Looks)
 - RelEx language comprehension system
 - and more (what, exactly? Fishgram? ...)

The roadmap uses Piagetan stages of development (see Piaget's theory of cognitive development), with five stages from infant to adult: infantile, concrete, formal, reflexive, and full self-modification.

Currently (2010-2012):
 - infant-level intelligence for video-game virtual-world characters (Piagetan infantile)
 - simple English-language dialogue: answering questions, taking instructions
 - integration with DeSTIN for object and event recognition in images/video
 - release of OpenCog v1.0
 - two-year funded project from the Hong Kong government (2011-2012)
 - Unity-based Minecraft-like world built
 - proxy to Unity built, 3D pathfinding
 - PLN body-control systems debugged/refactored, PLN-based planner built
 - OpenPsi motivation/emotion system implemented; basically lots of work with the virtual world

2013-2014 - a complete, integrated proto-AGI mind at toddler level (Piagetan concrete)
 - collective cognition: multiple intelligent robots share knowledge and learn by copying
 - robot control and motion planning, tested in the BLISS robot lab (Xiamen, China) with the Nao platform
 - experiential language learning
 - system to let OpenCog instances synchronise with a central store on initialisation
 - OpenCog v2.0

2015-2016 - advanced learning and reasoning (Piagetan formal)
 - humanoid robots outside the lab in rich environments
 - initial experimentation with lab equipment (e.g. gene sequencers)
 - initial experimentation with mathematical theorem proving
 - all of this working towards artificial scientists, service robots, and other useful applications

2017-2018 - AGI experts
 - creation of an artificial scientist, working in a lab on its own and designing experiments
 - a service robot performing household tasks, driven by English-language communication
 - a virtual assistant that accompanies its user in online spaces/augmented realities (sounds like the Google Assistant)

2019-2021 - full-on human-level AGI
 - won't emulate humans in full detail, but will do the general things that humans do

2021-2023 - advanced self-improvement (Piagetan reflexive)
 - can significantly edit its own code base
 - training in further areas of science, industry, etc.
 - can inspect and modify its own memory and source code (humans can't do this to themselves; computers can)

Practical applications

OpenCog is being used in practical projects: video games, information retrieval, and biomedical informatics (analysing DNA data and the genetics of long-lived fruit flies - see "AIs, superflies, and the path to immortality"), as well as hedge fund / financial prediction on the Hong Kong stock exchange.
animal childlike AGI + useful narrow AI = useful early-stage AGI


2008: supported with a grant from the Singularity Institute (now discontinued).
Two-year funded project from the Hong Kong government (2011-2012).
Support from Novamente LLC, Ben's own company.
In August 2011 Ben mentioned pitching a proposal for four-year funding (2012-2015) to various funders.
Xiamen University BLISS lab - presumably Chinese government funding.
 - Hugo de Garis was previously here


Two-hour presentation by Ben Goertzel, August 2011:

More videos:

People involved

OpenCog developers can be broadly categorised into three groups:

  • Andre Senna - Commerce director at Igenesis, Brazil
  • Ari Heljakka - CSO at Dream Broker, Finland
  • Ben Goertzel - Project leader
  • Bruce Klein - VP of Business Development at Novamente until 2010
  • Cassio Pennachin - Founder and CEO at Igenesis, Brazil
  • David Crane
  • David Hart - OpenCog project management, Australia
  • David "Kizzo" Kilgore - Google Summer of Code 2009
  • Deheng Huang - Hong Kong PolyU, Xiamen University
  • Gustavo Gama - Software engineer at Igenesis, Brazil
  • Jared Wigmore - OpenCog Project Assistant at Hong Kong PolyU
  • Jekin Trivedi - SVM implementation in OpenCog, Mumbai
  • Joel Pitt - OpenCog core developer, Hong Kong
  • Linas Vepstas - Machine learning R&D, Link Grammar, RelEx
  • Matt Iklé - Worked at Novamente 2006-2009
  • Mike Ross - Created the original version of RelEx
  • Min Jiang - Professor at Xiamen University
  • Moshe Looks - Google AI researcher (previously Novamente)
  • Murilo Queiroz - Engineer at Nvidia, Brazil
  • Nil Geisweiller - Developer, maintainer of CogBuntu
  • Ruiting Lian - Student at Xiamen University, Hong Kong PolyU
  • Thiago Maia - CEO of Vetta Group
  • Zhenhua Cai - Hong Kong PolyU, Xiamen University