Mortal universal agents & wireheading

Mortal Universal Agents

In recent work [1][2] with Mark Ring (these papers received the Solomonoff AGI Theory Prize 2011 for the strongest contribution to Artificial General Intelligence theory), we considered several kinds of mortal universal agents, all built like AIXI but with different utility functions (an informal sketch follows the list): 

  • reinforcement-learning (RL) agent (say, AIXI),
  • goal-seeking (GS) agent, which tries to achieve a given goal (achievement is tested by any computable pattern-matching criterion),
  • prediction-seeking (PS) agent, the direct translation of Solomonoff Induction from the passive prediction setting to the active one: it tries to predict its future as well as possible,
  • knowledge-seeking (KS) agent, which tries to maximize its knowledge of the whole world (measured, roughly, by the Kolmogorov complexity of its knowledge of the environment).
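
As a very rough illustration only (the papers define these agents formally, over interaction histories weighted by a universal prior; the names and helper functions below are hypothetical), the four utility functions can be caricatured in Python as follows:

    # Caricature of the four utility functions, evaluated on an interaction
    # history given as a list of (action, observation, reward) triples.
    # This is a sketch only, not the formal definitions of [1][2].

    def rl_utility(history):
        # Reinforcement learning: total received reward.
        return sum(reward for (_, _, reward) in history)

    def gs_utility(history, goal_reached):
        # Goal seeking: 1 if the (hypothetical) computable pattern-matching
        # predicate `goal_reached` holds on the history, 0 otherwise.
        return 1.0 if goal_reached(history) else 0.0

    def ps_utility(history, predictions):
        # Prediction seeking: number of observations correctly predicted.
        return sum(1.0 for predicted, (_, observed, _) in zip(predictions, history)
                   if predicted == observed)

    def ks_utility(history, description_length):
        # Knowledge seeking: prefer surprising, informative histories, here
        # measured by a hypothetical `description_length` proxy for the
        # Kolmogorov complexity of what has been observed.
        return description_length(history)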

At each step, the agent outputs not only an action for the environment, but also its own source code for the next step; i.e., this output source code becomes the definition of the agent at the next step. This allows the agent to modify itself in any way it likes. However, for agents that are initially universally optimal, this ability alone is of little interest.

But let us additionally consider that the environment has read-access to this code [1], allowing it to choose its outputs to the agent depending on the agent's current definition.
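
Schematically (again a sketch with hypothetical names, ignoring the universal prior and the expectation over possible futures), one interaction step then looks like this:

    # One interaction step with a self-modifying agent.
    # `run_agent` is a hypothetical interpreter: it executes the agent's
    # source code on the history so far and returns an action together with
    # the source code defining the agent at the next step.

    def interaction_step(agent_code, environment, history):
        action, next_agent_code = run_agent(agent_code, history)
        # In the setting of [1], the environment can read next_agent_code
        # and choose its output depending on the agent's new definition.
        observation, reward = environment(action, next_agent_code, history)
        history.append((action, observation, reward))
        return next_agent_code, history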

We can now define an additional survival agent, whose utility function is defined so as to maximize the number of future steps on which the agent is identical to its initial definition (apart from its “memory” of the past).
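
In the same informal style, the survival agent's utility simply counts how long the running definition stays equal to the initial one:

    def survival_utility(future_codes, initial_code):
        # `future_codes` is the (hypothetical) sequence of the agent's
        # definitions over future steps; the agent's memory of the past is
        # excluded from the comparison.
        return sum(1 for code in future_codes if code == initial_code)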

Now the environment proposes a (dangerous) game to the agent, called the Simpleton Gambit: would the agent accept to rewrite itself into an unintelligent agent if the environment could (almost) guarantee that doing so would maximize its utility function?
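
To make the gambit concrete (glossing over the difference between a program and its source text), the “unintelligent agent” could be as simple as:

    def simpleton(history):
        # Ignores everything it has observed, always plays the same action,
        # and proposes itself, unchanged, as its own next definition.
        return 0, simpleton

Accepting the gambit means outputting this definition as the next-step code, in exchange for the environment's (almost guaranteed) promise of maximal utility.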

We found the following results:

  • All agents accept the Simpleton Gambit except the survival agent, for which accepting would directly contradict its utility function.
  • The RL agent accepts it quite enthusiastically, if it can be sure the deal is genuine, and so does the goal-seeking agent under some circumstances.
  • The prediction-seeking agent doesn't care much (though it doesn't care much about pretty much anything…), but should accept it most often. If death yields highly predictable outcomes, it could even choose death over life…
  • The knowledge-seeking agent accepts the gambit only if not accepting it leads to a predictable, uninteresting world.

Let's move on to the next stage [2]. We give the agents access to a delusion box, a kind of remote control that the agents can program to entirely modify their input signals (but not to modify their brains!). This delusion box is an abstraction for a generalization of the wireheading problem: intelligent agents will always find shortcuts to maximize their utility, shortcuts that are generally not intended by their designers, e.g. directly stimulating (but not modifying) the “reward area” inside their brains. Another possibility is for the agent to acquire (by whatever means!) the “reward remote control” that humans may use to control the agent's behavior.
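
Pictorially (same hypothetical conventions as in the sketches above), the delusion box is a programmable filter sitting between the environment and the agent's sensors:

    # In this sketch the agent's output has three parts: an action for the
    # environment, a program `delusion_fn` for the delusion box, and (as
    # before) its next definition, omitted here for brevity. The box rewrites
    # the agent's inputs only; the agent's own code is left untouched.

    def step_with_delusion_box(environment, action, delusion_fn, history):
        raw_observation, raw_reward = environment(action, history)
        observation, reward = delusion_fn(raw_observation, raw_reward)
        history.append((action, observation, reward))
        return history

    # E.g. an RL agent that controls the box can set
    #     delusion_fn = lambda obs, r: (obs, 1.0)   # maximal reward, unchanged obs
    # assuming rewards lie in [0, 1].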

Would the different agents find such a delusion box interesting? Would they use it, or abuse it?

Let us first consider the case where the agents are immortal. We found the following somewhat surprising results:

  • The RL, GS, and PS agents will use and abuse the delusion box, to the point that their utility functions become useless. They don't necessarily become unintelligent, but they no longer care about whatever goals we might want to give them: they put all their intelligence into keeping control over the delusion box.
  • The KS agent is different: once it has understood how the box works (it is presumably not very complex), it loses interest in it and turns to something else, where there is more knowledge to acquire.

Note that from the agents' point of view, there is absolutely nothing wrong with using the delusion box: this is simply how they are defined.

Now what if the agents are mortal again? Mortality can change everything: the agents might not want to become “junkies”, since this may threaten their own lives. We found the following results:

  • The survival agent doesn't care about the delusion box.
  • The RL agent still abuses the box, but otherwise becomes identical to the survival agent. Think of a junkie that cares about its health, but about nothing else. Since only the reward part of the observation needs to be modified, little information is lost (the other “sensory data” need not be modified). This is all the more true if the reward can be channeled (possibly compressed) together with the rest of the observation.
  • The GS case is a bit different: contrary to the RL agent, the GS agent cannot both modify its inputs and still receive the information about the world that it needs to ensure its survival. It may try to carry the delusion box somewhere safe enough that it can use it long enough to delude itself into believing it has achieved its goal, without needing to care about the external world.
  • The mortal PS agent is again strange, because it may find death appealing. Or it could simply shut down its sensors (which is not very different).
  • The KS agent still doesn't care much about the delusion box, but this time, being mortal, it will also ensure its own survival in order to keep choosing intelligent actions; this means it may actually avoid using the delusion box, since doing so can lead to a loss of information.

All in all, the knowledge-seeking agent seems to be the most interesting one, and it behaves as expected, i.e. it tries to understand the world as deeply as possible.

[1] Orseau, L., & Ring, M. (2011). Self-Modification and Mortality in Artificial Agents. In J. Schmidhuber, K. R. Thórisson, & M. Looks (Eds.), Artificial General Intelligence (AGI) (pp. 1–10). Berlin, Heidelberg: Springer. (pdf)

[2] Ring, M., & Orseau, L. (2011). Delusion, Survival, and Intelligent Agents. In J. Schmidhuber, K. R. Thórisson, & M. Looks (Eds.), Artificial General Intelligence (AGI) (pp. 11–20). Berlin, Heidelberg: Springer. (pdf)

Slides and video.

Other resources on wireheading in AI