Mitigating the Inevitable Failure of Knowledge Representation

Bibliographic Details
Main Author: Landauer, Christopher
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: This paper is a continuation of a previous paper on self-modeling systems, concerning mitigation methods for the Get Stuck Theorems, which are powerful theorems about the limits of knowledge representation. The First Get Stuck Theorem says that since there are only finitely many data structures of any given size, a system that tries to save more and more data / information / knowledge must use ever-larger structures, which eventually become too large for effective computation.

The mitigations we described are Behavior Mining, which builds models of the system and environment behavior; Model Deficiency Analysis, which assesses the efficacy of those models and determines how to improve them; Knowledge Refactoring, which restructures the saved data for more efficient access and smaller storage; and Constructive Forgetting, which explicitly discards some data deemed to be less critical.

We argue that these classes of mitigations, along with a couple of new ones, can help a system retain effectively computable knowledge structures in a dynamic environment at higher levels of difficulty (of course, as the environment becomes more dynamic, all systems, including individual biological organisms and even species, eventually fail).
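A minimal worked form of the counting argument behind the First Get Stuck Theorem, assuming binary encodings for concreteness (the paper does not fix a particular representation): the number of distinct structures of at most n bits is

    \sum_{k=0}^{n} 2^k = 2^{n+1} - 1

so a system that must distinguish more than 2^{n+1} - 1 stored items cannot do so within n bits; its structures are forced to grow as its saved knowledge grows.

As one possible illustration of Constructive Forgetting, the sketch below keeps a bounded store that explicitly evicts its least-critical entries when a size limit is exceeded. This is an assumption-laden toy, not the paper's method; BoundedStore, put, and the numeric criticality scores are hypothetical names introduced here.

    import heapq

    class BoundedStore:
        """Toy knowledge store that 'constructively forgets': once a fixed
        capacity is exceeded, the lowest-criticality entries are explicitly
        discarded. Illustration only; not the policy from the paper."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.items = {}   # key -> (criticality, value)
            self.heap = []    # min-heap of (criticality, key); may hold stale entries

        def put(self, key, value, criticality):
            self.items[key] = (criticality, value)
            heapq.heappush(self.heap, (criticality, key))
            while len(self.items) > self.capacity:
                crit, victim = heapq.heappop(self.heap)
                # Skip heap entries made stale by a later put() on the same key.
                if self.items.get(victim, (None,))[0] == crit:
                    del self.items[victim]   # the explicit act of forgetting

        def get(self, key):
            entry = self.items.get(key)
            return entry[1] if entry is not None else None

    # Example: with capacity 2, adding a third item evicts the least critical.
    store = BoundedStore(capacity=2)
    store.put("a", "high-value fact", criticality=0.9)
    store.put("b", "low-value fact", criticality=0.1)
    store.put("c", "medium-value fact", criticality=0.5)   # "b" is forgotten
    assert store.get("b") is None and store.get("a") == "high-value fact"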
ISSN: 2474-0756
DOI: 10.1109/ICAC.2017.32