Author: Özkural, Eray
Date accessioned/available: 2016-02-08
Date issued: 2011
ISSN: 0302-9743
URI: http://hdl.handle.net/11693/28377
Date of conference: August 3-6, 2011
Conference name: 4th International Conference, AGI 2011
Abstract: We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We introduce four synergistic update algorithms that use a stochastic context-free grammar as a guiding probability distribution over programs. The update algorithms adjust production probabilities, re-use previous solutions, learn programming idioms, and discover frequent subprograms. A controlled experiment with a long training sequence shows that our incremental learning approach is effective. © 2011 Springer-Verlag Berlin Heidelberg.
Language: English
Keywords: Controlled experiment; General intelligence; Incremental learning; Learning programming; Long term memory; Machine learning methods; Stochastic context-free grammar; Subprograms; Training sequences; Algorithms; Context-free grammars; Learning systems; Probability distributions; Heuristic methods
Title: Towards heuristic algorithmic memory
Type: Conference Paper
DOI: 10.1007/978-3-642-22887-2_47
Book DOI: 10.1007/978-3-642-22887-2
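The abstract describes a stochastic context-free grammar used as a guiding probability distribution over programs, with update algorithms that shift production probabilities toward previously successful solutions. A minimal illustrative sketch of that general idea (the grammar, symbol names, and the reinforcement rule below are assumptions for demonstration, not the paper's actual algorithms):

```python
# Illustrative sketch only: an SCFG as a guiding distribution over programs.
# The grammar contents and the probability-update rule are hypothetical.
import random


class SCFG:
    def __init__(self, rules):
        # rules: {nonterminal: [(probability, expansion_tuple), ...]}
        self.rules = rules

    def sample(self, symbol="EXPR", depth=0, max_depth=6):
        """Sample a program string top-down from the grammar."""
        if symbol not in self.rules:
            return symbol  # terminal symbol
        options = self.rules[symbol]
        if depth >= max_depth:
            # Bound recursion: take the expansion with fewest nonterminals.
            expansion = min((e for _, e in options),
                            key=lambda e: sum(s in self.rules for s in e))
        else:
            probs, expansions = zip(*options)
            expansion = random.choices(expansions, weights=probs)[0]
        return " ".join(self.sample(s, depth + 1, max_depth)
                        for s in expansion)

    def reinforce(self, symbol, expansion, lr=0.1):
        """Shift probability mass toward a production seen in a solution."""
        updated = [(p + lr * ((1.0 if e == expansion else 0.0) - p), e)
                   for p, e in self.rules[symbol]]
        total = sum(p for p, _ in updated)  # renormalize to a distribution
        self.rules[symbol] = [(p / total, e) for p, e in updated]


# Usage: reward the terminal production after it appears in a solution.
g = SCFG({"EXPR": [(0.5, ("x",)),
                   (0.5, ("(", "EXPR", "+", "EXPR", ")"))]})
g.reinforce("EXPR", ("x",))
```

After the update, the reinforced production's probability rises (here from 0.5 to 0.55) while the distribution still sums to one, so later samples are biased toward program fragments that worked before.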