One of his recent position papers presents some examples from psychopathology that may be valuable to look at when developing AI for interactive narratives. He argues that interactive narrative is a good playground for thinking about these phenomena in concrete computational terms, and that current architectures aren’t particularly designed for this type of play.
The first example is self-medication:
What’s interesting about self-medication is that although it is generally caused by some outside stressor, its goal is not to alleviate the stressor, so much as to regulate one’s own affective response to the stressor. If you get drunk because your spouse left you, the goal of drinking isn’t to get your spouse back, but to restore some sort of emotional equilibrium.
Although we can always add the rule to an agent’s program that says (IF NO-DATE EAT-ICE-CREAM), current architectures don’t account well for the systematicity of this general phenomenon. Not everyone will eat ice cream in response to perceived rejection, but nearly everyone will respond with some sort of self-soothing behavior.
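One way to read the systematicity point computationally: rather than hard-coding stressor-specific rules like (IF NO-DATE EAT-ICE-CREAM), an agent could carry a general affect-regulation drive that picks from whatever self-soothing actions that particular agent happens to have. This is only a minimal sketch of the idea, not anything from the paper, and all names here are hypothetical:

```python
# Hypothetical sketch: a general affect-regulation loop instead of a
# stressor-specific rule like (IF NO-DATE EAT-ICE-CREAM).

class Agent:
    def __init__(self, soothing_actions):
        # Which soothing behaviors an agent has is individual
        # (ice cream, drinking, exercise...); the regulation drive is not.
        self.soothing_actions = soothing_actions
        self.distress = 0.0

    def perceive_stressor(self, severity):
        # The stressor only matters insofar as it raises distress.
        self.distress += severity

    def act(self):
        # The trigger is the internal affective state, not the stressor
        # itself: the action regulates distress without addressing its cause.
        if self.distress > 0.5 and self.soothing_actions:
            self.distress = max(0.0, self.distress - 0.4)
            return self.soothing_actions[0]
        return "idle"

a = Agent(["eat_ice_cream"])
a.perceive_stressor(1.0)
print(a.act())  # a self-soothing action, though it does nothing about the stressor
```

The point of the sketch is that the *systematicity* lives in the regulation loop, which is shared, while the particular behavior is just data that varies per agent.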
The next example he brings up is limerence (the pattern of obsession, idealization, and fear most commonly associated with “falling in love”):
One of the most paradoxical characteristics of limerence is that it is driven in large part by uncertainty. People don’t become limerent toward those who indicate unambiguous interest or rejection toward them, but toward those whose behavior is ambiguous or inconsistent… Once limerence begins, perceived rejection by the beloved actually increases the amount of time spent in limerent fantasy rather than reducing it (although sustained rejection will reduce and ultimately eliminate it).
[T]his is interesting for AI because current architectures don’t allow agents to [satisfy] goals (albeit temporarily and unsatisfactorily) through fantasy.
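To make that concrete, here is a hypothetical toy sketch (my own, not the paper’s) of what letting fantasy count toward a goal might look like: fantasy raises a goal’s satisfaction level a little, but that satisfaction decays quickly, so it is temporary and unsatisfactory in exactly the way the quote describes:

```python
# Hypothetical sketch: fantasy can partially and temporarily satisfy a
# goal, unlike real fulfilment. All names and constants are illustrative.

class Goal:
    def __init__(self, name):
        self.name = name
        self.satisfaction = 0.0  # 0 = unmet, 1 = fully met

    def fantasize(self):
        # Fantasy raises satisfaction, but far less than real fulfilment would.
        self.satisfaction = min(1.0, self.satisfaction + 0.2)

    def decay(self):
        # Fantasy-based satisfaction is unstable and fades on its own.
        self.satisfaction = max(0.0, self.satisfaction - 0.15)

goal = Goal("be_with_beloved")
goal.fantasize()
print(goal.satisfaction)  # temporarily nonzero: the goal is "satisfied" in fantasy
for _ in range(5):
    goal.decay()
print(goal.satisfaction)  # back to 0.0: the satisfaction doesn't last
```

An architecture built this way would also explain why rejection increases fantasy time: if fantasy is the only action that moves satisfaction upward, a blocked goal makes it the best available option.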
He notes how authoritarian personality predicts behavioral patterns tied to group affiliation and social rank, though this particular aspect was far less clear on my reading:
Authoritarianism also predicts certain aspects of behavior. For example, when asked to choose punishments for others’ crimes, high authoritarians will in general choose more severe punishments than low authoritarians, and report greater pleasure in administering the punishment. However, their assignment of punishment will depend on the identity of the perpetrator; an accountant who started a fight with a “hippie panhandler” will be given less of a punishment than if the subject is told the hippie started the fight with the accountant.
Modeling these traits computationally requires build[ing] reasoning systems in which (1) reasoning processes depend on the social status and affiliation of those being reasoned about, yet (2) the system itself is unaware of such dependencies.
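A crude way to picture requirements (1) and (2) together, again purely my own illustration and not anything from the paper: the punishment computation uses the perpetrator’s affiliation, while the system’s self-report mentions only the crime:

```python
# Hypothetical sketch of requirements (1) and (2): punishment depends on
# the perpetrator's social affiliation, but the reasoner's self-report
# makes no mention of that dependency. Groups and constants are made up.

def assign_punishment(crime_severity, perpetrator_group, in_group="accountants"):
    # Requirement (1): the reasoning depends on social affiliation.
    bias = -1 if perpetrator_group == in_group else +1
    return max(0, crime_severity + bias)

def explain_punishment(crime_severity):
    # Requirement (2): the system's introspection references only the
    # crime, not the identity-based bias actually used above.
    return f"punished in proportion to severity {crime_severity}"

print(assign_punishment(3, "accountants"))  # in-group leniency
print(assign_punishment(3, "hippies"))      # out-group severity
print(explain_punishment(3))
```

The interesting (and hard) part is making (2) a structural property of the architecture rather than, as here, just a dishonest explanation function bolted on beside the real one.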
I don’t really follow the reasoning for item (2) above… I also feel that, although this is a position paper on a perhaps more esoteric subject, it gives little context about what current AI architectures actually look like, or at the very least a few more concrete details on how they’re inappropriate for this type of thinking. Nonetheless, I think his point is that these things are important to start exploring at a computational level when considering AI models for narratives…interesting.
He has also recently posted an essay, “What is computation”, that definitely seems worth a glance when I get the chance…it is written at an introductory level and should therefore be accessible to anyone interested.