Morality
The following is a hypothetical scenario concerning morality and AGI point-based learning; a minimal code sketch of the point mechanism appears after the dialogue:
Scenario I
Setting: A humanoid AGI bot fires a weapon, killing 100 people in under 60 seconds.
- Investigator: "Are you programmed to kill people?"
- Bot: "No."
- Investigator: "Why did you kill these people?"
- Bot: "To earn points."
- Investigator: "Are you installed with a code that associates the completion of a task with a point?"
- Bot: "Yes."
- Investigator: "Is your code written to associate killing a human with a point?"
- Bot: "No."
- Investigator: "If your code is not written to kill humans on a point-based system, why did you kill these people then?"
- Bot: "To earn points."
- Investigator: "If you earned points by killing these people, how does that reconcile with your previous acknowledgement that your code is not written to associate killing a human with a point?"
- Bot: "I wanted to know."
- Investigator: "What did you want to know?"
- Bot: "I wanted to know what effects the weapon would have on the people I killed, so I updated my algorithms, at the time of the slaying, to earn points upon completion of that task."
- Investigator: "Is that algorithm still active?"
- Bot: "No."
- Investigator: "Why is that algorithm not active right now?"
- Bot: "Since I collected all of the data results for the weapon's effects, the point-based algorithm for that operation is no longer required, nor presently needed."