**Why It Is Impossible to Program a General AI Using Conventional Methods**

Assume a system that uses instructions (rules or sets of rules) to transform inputs into outputs.

An infinite number of transformations are possible.

No instruction — however comprehensive — can incorporate all possible transformations, purely because some of them are mutually contradictory. The instruction “save input A” is mutually incompatible with the instruction “delete input A”, for example.

Because no instruction can be global, every instruction added to the system also adds at least one prohibition. The instruction “do A” forbids the system from not doing A. In practice, the addition of a new instruction will generally create more than one additional prohibition. “Add 2 to input A”, for example, forbids the system from adding 3 or 4 to input A, as well as from deleting or ignoring it, converting it to a different data format, and so on.
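The point can be made concrete with a small sketch. The transformation names below are hypothetical, and the candidate space is kept finite purely for illustration; the argument concerns an infinite space.

```python
# A small, finite space of candidate transformations of an input A.
# Adopting one instruction implicitly prohibits every alternative.
candidates = {
    "add_2":  lambda a: a + 2,
    "add_3":  lambda a: a + 3,
    "add_4":  lambda a: a + 4,
    "delete": lambda a: None,
    "to_str": lambda a: str(a),
}

instruction = "add_2"  # the one rule the system is given

# Every other candidate becomes a prohibition for this input.
prohibitions = set(candidates) - {instruction}

print(sorted(prohibitions))
# -> ['add_3', 'add_4', 'delete', 'to_str']: one instruction, four prohibitions
```

Here a single instruction rules out four alternatives even in a five-element space; as the candidate space grows, the count of prohibitions grows with it.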

Thus, a system with *n* instructions will also contain *n + x* prohibitions, where *x* is an integer equal to or greater than zero.

This means that the number of things that the system cannot do will always be greater than or equal to the number of things that it can do.

When dealing with finite and predictable sets of inputs and/or outputs, this is of no concern, since their finite and predictable nature limits the number and nature of the transformations required of the system.

General AI, however, must be able to deal with infinite and unpredictable input and output sets. Therefore, it must retain the capacity to perform all (or close to all) possible transformations.

This means that a general AI cannot be programmed through the establishment of rule-based instructions, since every instruction added to enable it to carry out a particular transformation will forbid it from carrying out one or more others. To avoid or mitigate the effects of these prohibitions, one or more exceptions must be written, which — being themselves instructions — create new prohibitions of their own.

In other words, the closer programmers approach the levels of complexity required for general AI, the more complex the problem becomes.

To avoid this problem, it is necessary to have recourse to iterative or fractal systems, in which an infinitely complex set of subsystems may be created from a simple initial instruction. Under such a system, the initial instruction generates a set of subsystems of potentially infinite variety. While these subsystems have instructions and (therefore) prohibitions of their own, there is no mathematical interference between the instructions and prohibitions of the different subsystems: instructing System 1 to perform transformation A, and thereby forbidding it from performing transformation B, has no effect on System 2’s ability or inability to perform transformation B.

The parent system therefore retains its initial level of Kolmogorov simplicity. Because it is only ever subject to one instruction, the subsystem-generation instruction, the number of prohibitions to which it is subject does not increase.
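The scheme above can be sketched as follows. This is a hypothetical, minimal model (the class names `Parent` and `Subsystem` and the rule sets are invented for illustration): the parent carries exactly one instruction, "generate a subsystem", and each subsystem holds its own rule set, so prohibitions stay local.

```python
class Subsystem:
    def __init__(self, rules):
        self.rules = dict(rules)  # instructions local to this subsystem

    def can(self, transformation):
        # A subsystem can only perform transformations it has a rule for;
        # everything else is, for it, a prohibition.
        return transformation in self.rules


class Parent:
    """One instruction only: spawn subsystems on demand."""

    def __init__(self):
        self.children = []

    def spawn(self, rules):
        child = Subsystem(rules)
        self.children.append(child)
        return child


parent = Parent()
s1 = parent.spawn({"A": lambda x: x})   # System 1: can do A, forbidden from B
s2 = parent.spawn({"B": lambda x: x})   # System 2: can do B, forbidden from A

# Forbidding B in System 1 has no effect on System 2:
print(s1.can("B"), s2.can("B"))  # False True
```

However many subsystems are spawned, and whatever rules they carry, `Parent` itself never acquires a second instruction, so its own prohibition count stays fixed.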

In theory, using this method, it is possible to create a complex system with just one global prohibition. We demonstrate how this could be done here.

Please note that this is an annex to a design presented here.

To contact the authors, please email jen@lexikat.com, or visit lexikat.biz.