
Relating Event Calculus and Linear Logic, Part 2

Before I begin the technical content of this post, just a quick update: I gave my talk on "Generative Story Worlds as Linear Logic Programs" at INT in Milwaukee on the 17-18th, and I kept a travelogue, which includes notes from all the talks! The travelogue also details my time at AdaCamp, an "unconference" event on feminism & open technology, which I would strongly recommend to any women interested in those topics. I had a great time meeting people at both of these events!

--

Where I last left off in my discussion of the event calculus, I was considering the idea of "encoding" vs. "embedding." In some discussions, these terms map onto the terms "deep embedding" and "shallow embedding," respectively. If I understand them correctly, Alexiev's program gives a deep embedding of the event calculus: the core constructs of the calculus, fluents and events, are modeled as terms in the underlying logic program.

I was curious what it would mean to give a (shallow) embedding of the event calculus in linear logic, or indeed whether it is even possible. On one level, EC and LL have entirely different purposes: the event calculus enables introspection on event specifications to determine intervals, and linear logic enables deduction that models reachability between state configurations.

On another level, EC and LL "do the same thing," i.e. model the same type of domain. They model evolving state, and they enable reasoning about propositions that hold in some configurations but not in others.

So the shallowest possible embedding seems like it would be one that populates a signature S for which S; D |- A iff D @ T |-_{EC} A @ T' for some T < T'. In other words, the preorder given by logical entailment in linear logic should map onto the preorder given by time in the event calculus.

To achieve this mapping, we would model
- fluents (time-indexed propositions) as linear propositions (or types)
- events as linear logic rules

An example domain (the same Yale Shooting Problem as from the previous post) is given below.

%% first take: fluents as types; events as rules.
%% "terminates E Ts" & "initiates E Is" |--> ruleE : Ts -o {Is}.

% fluents as types.
loaded : type.
empty : type.
dead : type.
alive : type.

% events as rules.
load : empty -o {loaded}.
unload : loaded -o {empty}.
shoot : loaded * alive -o {empty * dead}.
wait : {1}.

#trace * {empty * alive}.

This encoding does in some sense "simulate" the domain, but what it fails to account for is control over "when things happen." The programmer of the domain has no way to dictate which rules proof search will select, or in which order, so unfortunately we cannot give an adequate encoding in this scheme.

We could try to fix it by explicitly modeling a happens E predicate and using it as a guard on our rules, but without explicitly modeling time as well, we still cannot control the order in which events take place.
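
For concreteness, here is a rough sketch of what such a guard might look like, reusing the fluent types from above (the do/load rule name and the happens predicate here are my own, hypothetical names; nothing in this fragment says when a happens token becomes available, which is exactly the problem):

% hypothetical sketch: an event becomes a piece of data, and a rule
% additionally consumes a "happens" token for that event.
event : type.
load : event.
happens : event -> type.

do/load : empty * happens load -o {loaded}.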

Thus, take 2, in which events are data and fluents are boolean-indexed types. This version of the code uses a clock that gets incremented after each event "happens".

time : type.
z : time.
s : time -> time.
infty : time.

% events as data
event : type.
load : event.
unload : event.
shoot : event.
wait : event.

happens : event -> time -> type.
clock : time -> type.

bool : type.
tt : bool.
ff : bool.
loaded : bool -> type.
dead : bool -> type.


happens/load : clock T * happens load T * loaded ff  
                   -o {loaded tt * clock (s T)}.
happens/unload : clock T * happens unload T * loaded tt  
                   -o {loaded ff * clock (s T)}.
happens/shoot : clock T * happens shoot T * loaded tt * dead ff  
                    -o {clock (s T) * loaded ff * dead tt}.
happens/wait : clock T * happens wait T -o {clock (s T)}.

init : type.
- : init -o {loaded ff * dead ff}.

#trace * {init * clock z
  * happens wait z * happens wait (s (s z))
  * happens shoot (s (s (s z))) * happens load (s z)}.

Results: this correctly models and simulates the domain, including control over event occurrences. But it fails to capture a basic property of the event calculus: inertia, the logical filling-in of the "gaps" between events. In this setup, we have to manually specify what happens at each and every timestep.

In the final take, I give a program that maintains events in a queue, which is centrally managed by a single rule:

% assumes the time and clock declarations from take 2 above
event : type.
fluent : type.

happens : event -> time -> type.
holds : fluent -> type.

elist : type.
nil : elist.
cons : event -> time -> elist -> elist.

queue : elist -> type.

% domain things
load : event.
unload : event.
wait : event.
shoot : event.

loaded : fluent.
dead : fluent.

% clock management
tick : clock T -o {clock (s T)}.
stop : queue nil * clock _ -o {1}.

% key rule: "happens" management
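% (this rule consumes the clock token, which is re-issued only by the
%  effect rule for the dequeued event, so nothing else fires mid-event)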
happen : clock T * queue (cons E T' Es) * leq T' T
             -o {queue Es * happens E T}.

% action cause/effects
happens/load : happens load T -o {holds loaded * clock T}.
happens/unload : happens unload T * holds loaded -o {clock T}.
happens/shoot : happens shoot T * holds loaded -o {holds dead * clock T}.
happens/wait : happens wait T -o {clock T}.

init : type.
- : init -o {clock z
* queue (cons load z (cons wait (s (s z)) (cons shoot (s (s (s z))) nil)))
}.

#trace * {init}.
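
One omission from the listing above: the happen rule relies on a "less than or equal" relation on times, which I haven't shown. What I have in mind is a standard backward-chaining definition along the following lines (the clause names are my own, and I'm glossing over exactly how this premise should be marked so that proof search treats it as a derivable side condition rather than a linear resource):

leq : time -> time -> type.
leq/z : leq z T.
leq/s : leq T T' -> leq (s T) (s T').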

This program more or less gives me what I was initially after: a forward-chaining program that simulates a sequence of events specified in event calculus terms, filling in the gaps logically. It's still not *quite* an adequate encoding, because we have no control over when the tick rule fires, so it might happen at any time; note that the happen rule invokes "less than or equal" to catch the cases where the clock has incremented past the next event's scheduled time. But we are able to keep events in the correct order nonetheless, via the queue as well as the synchronization technique of revoking the clock token until the event's effects have been incurred.

After carrying out this iterative design experiment, I conclude that the distinction I should have drawn previously was not between encoding and embedding, but between program analysis and program execution. And perhaps that's what I've learned about the event calculus in the process: it isn't really for writing specifications we can run, just ones we can analyze. My proofs-as-programs training led me to faulty expectations.

Comments

  1. I'm being a bit skimmy here, but I guess on one level the domain of the event calculus isn't quite as obvious as might initially appear? So we don't get what we expect by 'running' it, and in particular we don't get a neat schedule out without further work?

