## Tuesday, April 30, 2013

### Paper draft: Linear Logic Programming for Narrative Generation

Gwenn Bosser, João Ferreira, and I have just submitted a paper to LPNMR '13!

Linear Logic Programming for Narrative Generation

Abstract:
In this paper, we explore the use of Linear Logic programming for story generation. We use the language Celf to represent narrative knowledge, and its own querying mechanism to generate story instances, through a number of proof terms. Each proof term obtained is used, through a resource-flow analysis, to build a directed graph where nodes are narrative actions and edges represent inferred causality relationships. Such graphs represent narrative plots structured by narrative causality. Building on previous work evidencing the suitability of Linear Logic as a conceptual model of action and change for narratives, we explore the conditions under which these representations can be operationalized through Linear Logic Programming techniques. This approach is a candidate technique for narrative generation which unifies declarative representations and generation via query and deduction mechanisms.

## Monday, March 25, 2013

### Modeling gameplay in Celf, Part 3

(This is another iteration of the example I developed in Part 1 and Part 2; aside from incrementally building on that code, I think this post is relatively self-contained. Celf-contained, if you will.)

When I took a simple choice-based ("CYOA") game with a few bits of inventorial state and tried to add handles onto the rules so as to specify a specific sequence of player choices, something interesting happened: I had to make new decisions about which parts of the game the player could control, and how. For instance, whether they win or get eaten by a grue depends on a prior choice to take the lamp from the den or not; they cannot control their fate after that point. This makes clear that "getting the lamp" and "opening the door" are player-facing game controls, whereas "getting eaten by a grue" is a choice made by the game. We wound up enumerating those actions as follows.

'start : action.
'opendoor : action.
'getlamp : action.
'getkey : action.
'starttoden : action.
'starttocellar : action.
'dentocellar : action.
'cellartodoor : action.
'cellartoden : action.

It's tempting, then, to give the player a generalized, combinatorial command language, rather than a finite set of available actions, like so:

'startat : room -> action.
'open : object -> action.
'get : object -> action.
'moveto : room -> action.
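These constructors mention room and object types whose declarations don't appear in the post; a minimal sketch of what they'd have to look like, with inhabitants inferred from the rules below, might be:

```celf
% Inferred supporting declarations (not shown in the post):
room : type.
den : room.
cellar : room.

object : type.
lamp : object.
key : object.
door : object.
```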

For this version (which includes a few other small syntactic changes) the game rules look like this:

start_to_den    : cur_act ('startat den) * at_start -o {at_den * tick}.
start_to_cellar : cur_act ('startat cellar) * at_start -o {at_cellar * tick}.

den_to_cellar : at_den * cur_act ('moveto cellar) -o {at_cellar * tick}.
den_to_lamp   : at_den * cur_act ('get key) * ~got key -o {at_key}.
den_to_key    : at_den * cur_act ('get lamp) * ~got lamp -o {at_lamp}.
get_key       : at_key -o {got key * at_den * tick}.
get_lamp      : at_lamp -o {got lamp * at_den * tick}.

cellar_to_den  : at_cellar * cur_act ('moveto den) -o {at_den * tick}.
cellar_to_door : at_cellar * cur_act ('open door) -o {at_door}.

open_door_without_key : at_door * ~got key -o {at_cellar * ~got key * tick}.
open_door_with_key    : at_door * got key -o {at_dark}.

dark_with_lamp    : at_dark * got lamp -o {at_win}.
dark_without_lamp : at_dark * ~got lamp -o {at_lose}.
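The query mentioned later in the post (report END NSTEPS) also presupposes reporting rules and an initial context analogous to those from Part 2; a plausible sketch, with names and the shape of init inferred from the proof term shown below, is:

```celf
% Plausible reporting rules and initial state for this version
% (names inferred from the trace; not shown in the post):
report_win  : at_win * cur N -o {report w N}.
report_loss : at_lose * cur N -o {report l N}.

init : type = {~got key * ~got lamp * at_start * cur z * tick}.
```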

Then I started to wonder if I could recover the "fuzz testing" abilities from the original, branching-choice version of the game: could I still use Celf's logic programming engine to randomly "play" the game?

So I replaced this rule, which pulls a next action from a sequential table

next_act : tick * cur N * nth_act N A -o {cur_act A * cur (s N)}.

with this one:

player : tick * cur N -o {cur (s N) * (Pi a:action.cur_act a)}.

...and wasn't optimistic. The Pi a:action part within the forward-chaining monad generates a template cur_act in the context that can be instantiated with any action. Naïvely, I thought forward chaining would instantiate cur_act at non-applicable actions all over the place, meaning that queries on end states would fail most of the time (the game would reach stuck states).

Thinking about this more, in terms of focusing behavior and by analogy with A -> B, a rule generating Pi x:A.B ought to keep the Pi in focus, forcing a choice of a term of type A (here, an action). But since the proposition in question is actually a type, and the proof term depends on the particular derivation, I suspect (as suggested, but glossed over, in Frank's course notes) that Celf generates a fresh unification variable that remains unresolved until further constraints are introduced. In this sense, Pi takes on a more positive character than ->.

The upshot is that my #query * * * 50 init -o {report END NSTEPS} generates 50 pretty solutions, some winning and some losing, a shorter example of which looks something like this:

Solution: \X1. {
let {[X2, [X3, [X4, [X5, X6]]]]} = X1 in
let {[X7, X8]} = player [X6, X5] in
let {[X9, X10]} = start_to_den [X8 !('startat !den), X4] in
let {[X11, X12]} = player [X10, X7] in
let {X13} = den_to_lamp [X9, [X12 !('get !key), X2]] in
let {[X14, [X15, X16]]} = get_key X13 in
let {[X17, X18]} = player [X16, X11] in
let {X19} = den_to_key [X15, [X18 !('get !lamp), X3]] in
let {[X20, [X21, X22]]} = get_lamp X19 in
let {[X23, X24]} = player [X22, X17] in
let {[X25, X26]} = den_to_cellar [X21, X24 !('moveto !cellar)] in
let {[X27, X28]} = player [X26, X23] in
let {X29} = cellar_to_door [X25, X28 !('open !door)] in
let {X30} = open_door_with_key [X29, X14] in
let {X31} = dark_with_lamp [X30, X20] in
let {X32} = report_win [X31, X27] in X32}
#END = w
#NSTEPS = s !(s !(s !(s !(s !z))))

...which demonstrates (thanks to the ' syntactic markers) how the "player AI" chose to instantiate the universal quantification in a goal-directed way to satisfy each rule, with exactly the same random-but-constrained character as before.

So I think that's pretty neat.

(Code here.)

## Wednesday, March 20, 2013

### Modeling gameplay in Celf, Part 2: Simulating Interactivity

Where I last left off, I gave a toy example of using Celf to specify a game's causal structure, summarized by the following ruleset:

start_to_den_or_cellar : start -o {den & cellar}.
den_to_cellar_lamp_or_key : den -o
{cellar
& (nolamp -o {getlamp})
& (nokey -o {getkey})}.
get_lamp : getlamp -o {gotlamp * den}.
get_key : getkey -o {gotkey * den}.
cellar_to_den_or_door : cellar -o {den & opendoor}.
open_door_without_key : opendoor * nokey -o {cellar * nokey}.
open_door_with_key : opendoor * gotkey -o {dark}.
dark_with_lamp : dark * gotlamp -o {win}.
dark_without_lamp : dark * nolamp -o {lose}.
init : type = {nokey * nolamp * start}.

As presented, this is effectively "half a game", with no delineation between player choice and game logic -- a fact that allowed us to randomly sample the space of all possible play sequences by querying any final gameplay state -- but that's ultimately uninformative about interactivity.

The key idea of the transformation I'll outline in this post is that we can delineate the boundary between game and player using two atoms treated as "signals", one corresponding to a choice or query from the player (think "event handling") and one corresponding to the end of the game's internal computation, returning control to the player (or, in a more complex setting, to a renderer or printing mechanism).

Each rule that handles a player control will have an extra guard premise cur_act(A) for an action A corresponding to that rule, and each rule that returns control to the player will simply issue a tick, which will be handled by a rule serving as proxy between game and player.

(The final, complete code can be referred to in twine-interact.clf on Github.)

As a first pass, we need to disentangle all of the &'d-together choices and present them as separate rules (each of which will get its own guard).

start_to_den : start -o {den}.
start_to_cellar : start -o {cellar}.
den_to_cellar : den -o {cellar}.
den_to_lamp : den * nolamp -o {getlamp}.
den_to_key : den * nokey -o {getkey}.
get_lamp : getlamp -o {gotlamp * den}.
get_key : getkey -o {gotkey * den}.
cellar_to_den : cellar -o {den}.
cellar_to_door : cellar -o {opendoor}.
open_door_without_key : opendoor * nokey -o {cellar * nokey}.
open_door_with_key : opendoor * gotkey -o {dark}.
dark_with_lamp : dark * gotlamp -o {win}.
dark_without_lamp : dark * nolamp -o {lose}.

Next, we'll specify the allowable player actions. (The ' symbol is just an identifier convention I'm using to distinguish actions; it has no special meaning in Celf.) The movement and retrieval rules have been modified to fire only when their corresponding action is current. Omitting added type declarations, this is all that's changed:

start_to_den : cur_act 'starttoden * start -o {den}.
start_to_cellar : cur_act 'starttocellar * start -o {cellar}.
den_to_cellar : cur_act 'dentocellar * den -o {cellar}.
den_to_lamp : cur_act 'getlamp * den * nolamp -o {getlamp}.
den_to_key : cur_act 'getkey * den * nokey -o {getkey}.
get_lamp : getlamp -o {gotlamp * den}.
get_key : getkey -o {gotkey * den}.
cellar_to_den : cur_act 'cellartoden * cellar -o {den}.
cellar_to_door : cur_act 'cellartodoor * cellar -o {opendoor}.
open_door_without_key : opendoor * nokey -o {cellar * nokey}.
open_door_with_key : opendoor * gotkey -o {dark}.
dark_with_lamp : dark * gotlamp -o {win}.
dark_without_lamp : dark * nolamp -o {lose}.

The job of cur_act is to serve as a sort of "time-varying value" determined by a sequence of actions that can be specified separately. We can encode that sequence as a relation between a natural number N and an action A, saying that the Nth action is A. Here's an example of an action sequence:

% 'starttoden, 'getlamp, 'getkey, 'dentocellar, 'cellartodoor.
act0 : nth_act z 'starttoden.
act1 : nth_act (s z) 'getlamp.
act2 : nth_act (s (s z)) 'getkey.
act3 : nth_act (s (s (s z))) 'dentocellar.
act4 : nth_act (s (s (s (s z)))) 'cellartodoor.

Now we need to connect cur_act to this table. Let's try to be modular by putting it in a single rule:

next_act : cur N * nth_act N A -o {cur_act A * cur (s N)}.

However, this approach poses a problem: it doesn't keep the player's actions synchronized with the game the way we intend. Imagine initializing the context with cur z -- this rule could fire repeatedly until we'd loaded every action into the context without actually running the game code!

My solution is to add another guard premise. While cur_act passes control from player to game, this one, which I call tick, passes it back in the other direction, where the next_act rule serves as a proxy for the player:

next_act : tick * cur N * nth_act N A -o {cur_act A * cur (s N)}.

Now we need to do one more global pass on the game logic, issuing a tick at the end of every control sequence. For movement rules, this is just immediately after receiving the action. For rules that involve more complicated checks and have multiple cases (like open_door), this occurs at the end of each branch:

start_to_den : cur_act 'starttoden * start -o {den * tick}.
start_to_cellar : cur_act 'starttocellar * start -o {cellar * tick}.
den_to_cellar : cur_act 'dentocellar * den -o {cellar * tick}.
den_to_lamp : cur_act 'getlamp * den * nolamp -o {getlamp}.
den_to_key : cur_act 'getkey * den * nokey -o {getkey}.
get_lamp : getlamp -o {gotlamp * den * tick}.
get_key : getkey -o {gotkey * den * tick}.
cellar_to_den : cur_act 'cellartoden * cellar -o {den * tick}.
cellar_to_door : cur_act 'cellartodoor * cellar -o {opendoor}.
open_door_without_key : opendoor * nokey -o {cellar * nokey * tick}.
open_door_with_key : opendoor * gotkey -o {dark}.
dark_with_lamp : dark * gotlamp -o {win}.
dark_without_lamp : dark * nolamp -o {lose}.


For fun, we can load up our reporting mechanism with the number of steps we took, then load up the initial context and do a query:

report_win : win * cur N -o {report w N}.
report_loss : lose * cur N -o {report l N}.
init : type = {nokey * nolamp * start * cur z * tick}.
#query * * * 1 init -o {report END NSTEPS}.

Yielding the solution:

Solution: \X1. {
let {[X2, [X3, [X4, [X5, X6]]]]} = X1 in
let {[X7, X8]} = next_act [X6, [X5, act0]] in
let {[X9, X10]} = start_to_den [X7, X4] in
let {[X11, X12]} = next_act [X10, [X8, act1]] in
let {X13} = den_to_lamp [X11, [X9, X3]] in
let {[X14, [X15, X16]]} = get_lamp X13 in
let {[X17, X18]} = next_act [X16, [X12, act2]] in
let {X19} = den_to_key [X17, [X15, X2]] in
let {[X20, [X21, X22]]} = get_key X19 in
let {[X23, X24]} = next_act [X22, [X18, act3]] in
let {[X25, X26]} = den_to_cellar [X23, X21] in
let {[X27, X28]} = next_act [X26, [X24, act4]] in
let {X29} = cellar_to_door [X27, X25] in
let {X30} = open_door_with_key [X29, X20] in
let {X31} = dark_with_lamp [X30, X14] in
let {X32} = report_win [X31, X28] in X32}
#END = w
#NSTEPS = s !(s !(s !(s !(s !z))))


The point of all this was (1) to demonstrate how to simulate tightly-controlled interaction in a parametric way (while this was a global transformation of the original program, the only part that depended on the specific action sequence was the table representing that sequence); and (2) to show that the very specific, structural use of the "control-passing" atomic formulae suggests, as with "design patterns", a possibly useful extension to the language -- specifically, one that lets us express the programmer's intent of separate modules or processes, and lets us check that that intent is preserved.

## Sunday, March 17, 2013

### How to create the PL culture I'd like to believe we deserve

I herein interrupt my (ir)regular schedule to post about something sociological rather than metalogical.

I considered relegating this content to my personal blog, but honestly I think these words need to fall on the ears of exactly the folks who are mired enough in the technical community to follow research blogs. This is a post about "PL culture". What I mean by that is the characteristics, defined by the perceptions of its constituents, of "the PL community" -- or any subset of active programming languages researchers who find themselves in each others' company, perhaps at a conference or within a research group at a particular university.

In particular, I want to understand how PL might be failing as a community, and by that I mean either (1) fostering attitudes that make people within it feel othered/alienated/estranged or (2) serving as a barrier to the potential people and ideas it could include but doesn't. I want to cast some attention on the experiences of anyone who might ever sit around a table with a research group or stand around in a hallway at a conference and think, I'm never going to succeed in this community, because everyone here is so much more ____ than me.

On one level, this is probably recognizable to almost everyone at some point in their career: fill in the blank with "intelligent", "prolific", "knowledgeable", and you've got textbook impostor syndrome. What they don't tell you about impostor syndrome is that even if you're not an impostor in terms of your ability to work hard and think cleverly, you really might not belong in the sense that the prevalent culture is working against you, is engineered for the success of people who want different things than you or whose backgrounds and experiences more closely match those around them.

Let me take a moment to be clear where I'm coming from: I consider PL my home, my academic family that raised me (is still very much raising me) as a computer scientist and thinker. I've just been part of CMU's graduate recruiting process for the 5th year in a row, meeting with potential PL grad students and talking to them about what it's like to be one. Because of this regularly-scheduled checkpoint in which I describe my experiences out loud, I've been able to listen to my own sentiments evolve over time. And a fact that has stayed true that entire time is that PL is my home.

So to realize slowly, over the years, that there are a number of ways I feel dissonant with that culture entails a bit of a heartbreak. It seems inevitable that, in differentiating myself from a collective mindshare, I'm going to feel, well, different; but what worries me is that the cultural memes within academic PL are not on my side in terms of continuing its support and inclusion of me in light of that difference.

And I've discovered that I'm not alone. I asked Twitter in what ways they felt PL culture could be alienating, and got a number of really interesting responses, many quite different from mine. Sadly, several people who shared with me (privately) their similar feelings wound up departing from academia, PL, or both. I found some disturbing patterns especially in the treatment that women (trans and cis) and gender-nonconforming students encountered, in terms of "accidental" estrangement and worse. But even those not visibly different reported feeling invisible or excluded due to abnormal background, research interests, or argumentation habits.

I'm going to assume we all agree that this is a problem, first of all. I know that isn't the case, but feel free to check out now if you disagree with, for example, the gender imbalance in the field being a problem, because I'm not here to make a concerted argument on that point. All I will say is this: I want anyone eager to learn about and do PL to be pulled in and educated and embraced and listened to so that ultimately, they productively crank out awesome research that we can all learn from, and then we all crank out even more awesome research. End of story. If you're on board with that, read on.

Of course, it's significantly easier to say "yes, we want to include everyone!" than it is to actually be a fertile community for them to not only feel welcome but to thrive and lead. In light of that, I want to try to present some concrete advice that you can apply whenever socializing and collaborating with other PL enthusiasts or potential PL enthusiasts.

### How to create the community we all deserve

1. Don't dismiss non-STEM fields out of hand. Computer science has a pop-cultural reputation of being Only For Extra Smart People who clearly spend the largest portion of their brain cycles thinking about intensely difficult problems and thus have the most superior brains. This not only winds up setting an ego bar for anyone considering entering the field (if they don't already see themselves as a whizz kid, why should they try to hang out with people who are?), it's also dangerously myopic. I'm a huge fan of Off the Beaten Track as an attempt to try to cross disciplines with ours, but I have a hard time seeing it as a success so far, with a) very few crossed-with disciplines falling outside "other subfields of CS" or occasionally hard sciences and b) not much influence yet on mainline PL research. How are we going to grow our ideas if we can't communicate and cross-germinate them with external ones? When is the criterion of a language's expressiveness going to go beyond its ability to implement similar languages?

2. Inclusive language, y'all. This is a dead horse that keeps rising from the grave and definitely not unique to (or the worst in) PL, but it bears repeating. In formal settings like conferences I feel like we usually do well, but once the tone gets more "casual" like in ad-hoc social communities that grow around research groups, it can get pretty frustrating. Every male-as-default pronoun/name/hypothetical character is a grunch, i.e. a split-second solar-plexus-hitting reminder that a woman is Not The Assumed Audience. As Lindsey pointed out wrt the Haskell Symposium incident linked above:
You know what hurts about that sentence? The word "they" and the word "us". Usually, I like to think that when PL people speak of "us", I'm included in that "us". But apparently there are PL people for whom "us" doesn't mean "PL people", but rather, "PL guys".
This of course goes for heteronormative, racial and ableist assumptions that tend to creep into our language and hypothetical examples as well if we're not extra vigilant (to which point 5 below is also relevant).

3. Curb the interruption & other forms of one-upmanship. This is tough sometimes because we're all just so excitable and want to jump in right away with whatever we're thinking exactly when we think it. But the degree to which it happens sometimes makes me think that it's a lack of caring more than a lack of trying, or maybe even just ignorance of the fact that it's an issue. Consider whether you have a habit of interpreting a midsentence pause-to-collect-thoughts as a cue to make your point more loudly, and consider making a concerted effort to change that behavior.

4. Understand and discuss atypical brain function. One way to put this point is: stop valuing your colleagues on the basis of how "smart" you think they are, through e.g. how quickly they can solve a problem you put forth or how long it takes them to grasp a point from a paper or talk. Another thing I'm saying with this is that depression in academia is super common, yet we never talk about it; compounding situations like PTSD are less common yet can be totally crippling in combination with depression and the concomitant taboo/lack of sympathy for anyone who's not at least high-functioning with their atypicality. In fact perhaps we just expect that everyone in academia is "a little bit crazy", which means that a) we have some uniform idea of what that means and how it affects everyone (everyone responds to stress with workaholism, right?) and b) we don't talk about it at all, or what we could be doing to help each other, because we just think it's an inevitable part of the ride.

5. Recognize nonnormative family structure & other aspects of life outside academia. Basically, this is the idea that not everyone has followed-so-far and plans to continue following the Default Life Script. This article on geek/programmer culture, which has a similar aim for the software industry that I do in this post for PL, touches on this in one of its conversational suggestions to "talk about topics that are unique or important to you". Putting that forth as unilateral advice seems to me to ignore that it constitutes a risk some can't afford. But I do think that, particularly if you're at a stable point in your career where you can afford to take social risks like that, being visible to folks with similar stories who are at more vulnerable or formative stages could be hugely helpful: they'd know that Someone Like Them actually made it, wound up in a career that perhaps they could see themselves pursuing. Or even just that they have a potential ally to talk to if anyone gives them crap.

6. Get over the idea that studying logic makes you immune to fallacious or incomplete arguments. Valid argumentation requires not only clarity of thought but also empathy, humility, and introspection to overcome your biases and allow new information to influence your beliefs. We can't have productive discourse by coming to our conclusions alone, nor by assuming that whatever dogmatic statements we make (e.g. about specific programming idioms being more "natural") lie above critical examination.

7. Give positive feedback. This point arises from a specific suggestion from Chung-chieh Shan:
[T]hank people more often: asking a good question, engaging in discussion that (even if contentious) helped you clarify your stance and work, kicking a dinner group into action -- such helpful actions take work, we should express our (professionally relevant) gratitude for them, and I think it would help explain to many of us why we belong.
Another example of often-thankless activity I thought of is writing down an expository piece of "research folklore" previously conveyed by oral tradition; I've frequently wished we had better venues for solutions to open exposition problems.

Aside from thanking people, if you advise or mentor students much earlier in their career than you, positive feedback can also come in the form of remembering to say out loud what you appreciate about their work. Whatever is obvious to you about their talents and strengths is very likely not obvious to them.

These are all things that you-as-an-individual can do; there are lots of broader systemic points, such as the cost of conferences and the fact that someone whose path diverged from academia several years ago faces a barrier to entry that leaves them little hope of making their way back -- and I'd like to discuss what we can do about that as well, but I think one could easily write a separate, less-PL-specific post on those matters.

I'm saying all these things because honestly, I love being in PL and I don't want to leave. I want to be happy while I'm here, I want to make my colleagues happy, and I want to show folks who don't yet know what we're about that we're lovely people who think about interesting problems.

If you have more suggestions for this list, please feel free to submit them in comments.

Thanks especially to Chung-chieh Shan, Wm. Caylee Hogg, Philippa Cowderoy, and Lindsey Kuper, who contributed a lot of really important points to this discussion as well as helpful feedback to a first draft.

## Wednesday, March 13, 2013

### Modeling gameplay in Celf, Part 1: Fuzz Testing

I'm hereby resolving to stop biting off more than I can chew wrt blog posts - I've got two behemoths in drafts over just the past few days and nothing to show for them (yet). So right now I'm going to try to carefully explain a tiny example that I just worked out over the past half hour and I think nicely encapsulates a few ideas about using Celf and linear logic in the context of interactivity.

Assumed background for this post: familiarity with linear logic connectives, some familiarity with logic programming and the forward/backward chaining distinction.

Here's an informal spec for a simple branching game with a couple pieces of global state that track whether certain paths have been taken:

The cellar and the den are connected rooms. The lamp and the key are in the den. The cellar contains a door, which is locked and must be opened with the key. Once the door is unlocked, the player enters darkness, and if they have a lamp, they win; otherwise, they are eaten by a grue and lose.

Here is the author's eye view of the game as we'd write it in Twine:

(You can interact with the output of the build here.)

The pieces of global state here are represented by two boolean variables ("<<set $x = ...>>" and "<<if $x eq ...>>" are JavaScript macros under the hood). Double-brace [[links]] create a player control to navigate to the linked passage, and the edges in the graph show what's linked to what. This simple visual authoring tool gets a lot of people started writing their first games/interactive stories, and many have used it to build stunningly complex and evocative narratives.

Here's how I'd translate this game into a purely-atomic, forward chaining linear logic program, adding one rule at a time:

start -o {den & cellar}.

When the player begins, they choose to go to the den or the cellar.

den -o {cellar & getlamp & getkey}.

This works if we always want to display the option to get the lamp and the key, even after they've been retrieved. If we want to conditionally display those choices, we'd need a rule like this:

den -o
{cellar
& (nolamp -o {getlamp})
& (nokey -o {getkey})}.

We stay in the den if we try to get the key or the lamp.

getlamp -o {gotlamp * den}.
getkey -o {gotkey * den}.

Instead of changing a boolean value, we consume one piece of state and replace it with one intended as its complement (most logic programming languages don't support negation directly, but it's easy to encode with a predicate, so this is really just a shortcut).

cellar -o {den & opendoor}.

From the cellar, we can go back to the den or try to open the door.

opendoor * nokey -o {cellar * nokey}.
opendoor * gotkey -o {dark}.

Opening the door without the key keeps us in the cellar and retains the fact that we do not have the key. Opening the door with the key takes us to darkness.

dark * gotlamp -o {win}.
dark * nolamp -o {lose}.

If we have the lamp, we win; otherwise, we lose.

Okay, so far, all we've done is transcribe the game in a funnier-looking notation. Instead of trying to make arguments about linear logic versus imperative programming and their relative scalability for programming state, I want to point out something distinctly cool we can now do.

First, let's make this a properly parsing Celf program with rule names that might be somewhat informative. I've also added a reporting mechanism, breaking slightly from the atoms-only fragment with a dependent type whose index is w if we win and l if we lose. We also set up the initial state {nokey * nolamp * start}.
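The reporting mechanism itself isn't shown in the post; a sketch consistent with the description above and with the proof term below (the dependent index, the report_win rule, and init are from the post; the remaining declaration names are my guesses) might look like:

```celf
% Hypothetical reporting mechanism (auxiliary names guessed):
outcome : type.
w : outcome.
l : outcome.
report : outcome -> type.

report_win  : win -o {report w}.
report_loss : lose -o {report l}.

init : type = {nokey * nolamp * start}.
```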

Now note the query at the bottom:

#query * * * 10 init -o {report X}.

This says (letting the first three arguments to query, which I can never remember, be whatever their defaults are) that we want to run the query 10 times, starting the state with init and succeeding if the final state reports a value.

Due to the nondeterminism in forward chaining, what this does is actually randomly simulate ten complete gameplay iterations. And if we've named our rules cleverly enough, the proof terms it outputs, such as the one below, show us exactly the trace of that gameplay:

Iteration 2
Solution: \X1. {
let {[X2, [X3, X4]]} = X1 in
let {X5} = start_to_den_or_cellar X4 in
let {X6} = den_to_cellar_lamp_or_key (X5 #1) in
let {X7} = cellar_to_den_or_door (X6 #1) in
let {[X8, X9]} = open_door_without_key [X7 #2, X2] in
let {X10} = cellar_to_den_or_door X8 in
let {X11} = den_to_cellar_lamp_or_key (X10 #1) in
let {[X12, X13]} = X11 #2 #1 X3 in
let {X14} = get_lamp X12 in
let {X15} = den_to_cellar_lamp_or_key X13 in
let {[X16, X17]} = X15 #2 #2 X9 in
let {X18} = get_key X16 in
let {X19} = den_to_cellar_lamp_or_key X17 in
let {X20} = cellar_to_den_or_door (X19 #1) in
let {X21} = den_to_cellar_lamp_or_key (X20 #1) in
let {X22} = cellar_to_den_or_door (X21 #1) in
let {X23} = den_to_cellar_lamp_or_key (X22 #1) in
let {X24} = cellar_to_den_or_door (X23 #1) in
let {X25} = open_door_with_key [X24 #2, X18] in
let {X26} = dark_with_lamp [X25, X14] in
let {X27} = report_win X26 in X27}
#X = w

Every "#n" indicates a choice between alternatives presented between "&"s, and tuples [X, Y] are the proof terms for tensored resources (A * B). What we get here is a linear sequence of moves carried out by a player, eventually resulting in a win or lose condition. And we get ten iterations to examine, effectively automatically fuzz-testing the playable space.

What's also interesting to consider is that the variable dependencies captured here can let us reconstruct a "causality graph", which shows the structure of this proof up to concurrent equality -- something of interest to computational narratology that I'm currently working on with Gwenn Bosser.

One thing to note is that the player bounces around a lot between the cellar and the den because that's always something available to do. How can we direct proof search to more accurately simulate a goal-driven player search strategy? One end of this spectrum is complete, playable interactivity, but I'd like to explore more points along the spectrum of automation to interaction.

## Monday, November 12, 2012

### Interactivity, week 3

Last week I discussed some newly-explored proof-theoretic relationships between reasoning about process calculi and substructural logics, and vaguely mentioned some other programming models that I wanted to compare. Since then, I think I've come up with a way to phrase my research objective with this project: to relate the programming and reasoning methodologies surrounding interactivity through a common medium of proof theory.

That's not a thesis statement, but I figure it's better than "*waves hands excitedly* interactivity! Processes! Linear logic! Focusing!"

Anyway. What I've done so far has given me more questions than answers, but I like the questions a lot. The first of these questions I want to discuss in this post is "How can proof theory capture an appropriate notion of broadcast message replication?" To explain what I mean by that, I'll describe an example which I think captures some common interactive communication patterns.

### Lights Out

Consider a simplified Lights Out puzzle, a 2x2 grid of lights where toggling one also toggles its neighbors. Here's an implementation of the game (modulo its win condition, which I'll get to later) in puredata, a visual language where boxes represent time-varying values, and edges represent the paths that signals follow. A human can send a (unit) signal to a box by clicking on it.
(Does a screenshot count as source in a visual language? Should I also post the inscrutable text format it saves to? Oh well, here it is.)

A line between the bottom of box A and the top of box B means that whenever box A outputs a signal, box B receives it. The square (circleless) boxes are "toggles", which can be on or off, and the boxes with circles in them are "bangs", which just send a unit signal on their output line when they receive any signal on their input line. In this sense, the toggles are the indicators of the light state, whereas the bangs are the switches. To see why this separation is necessary, consider just wiring the input and output lines between all the neighboring boxes directly (text source):
Clicking on a toggle changes its value and sends a signal on its output lines, which then toggles its neighbors, which then send signals on their output lines... which include one back to the source. This creates an infinite loop; Pd's stack overflows, and the rendering never (visibly) changes. The level of indirection -- the separation of switch and indicator -- is necessary to stop the control flow after one signal propagation.
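A sketch of why the directly-wired version diverges, as a plain simulation (the cell numbering and the depth cap standing in for Pd's stack limit are my assumptions, not Pd semantics):

```python
# Direct wiring: every toggled light re-signals its neighbors, so a
# click echoes around the grid forever. Cells 0..3 are laid out with
# 0,1 on top; each cell's neighbors are its row- and column-partner.

NEIGHBORS = {0: (1, 2), 1: (0, 3), 2: (3, 0), 3: (2, 1)}

def click(lights, cell, depth=0):
    if depth > 50:                    # stand-in for Pd's stack overflow
        raise RecursionError("signal loop never terminates")
    lights[cell] = not lights[cell]
    for n in NEIGHBORS[cell]:         # ...which signal back, and so on
        click(lights, n, depth + 1)

looped = False
try:
    click([False] * 4, 0)
except RecursionError:
    looped = True                     # the cascade never dies out
```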

Another way to view this problem is to consider its encoding in CLF (eliding initial type declarations):

handle_sighuman : sighuman N -o {sigin N}.

handle_sigin : sigin N * light N S * opp S S'
  -o {light N S' * sigout N}.

handle_sigout : sigout N * udnbr N N1 * lrnbr N N2
  -o {sigin N1 * sigin N2}.

Most of the predicates are indexed by a light identifier, written with the prefix N. I expect most of the predicates to be self-explanatory, except that the "udnbr" and "lrnbr" predicates indicate the "up/down neighbor" and "left/right neighbor" of the indicated light.
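For concreteness, the neighbor relation for the 2x2 grid might be declared as facts along these lines (the layout -- l1, l2 on top, l3, l4 on the bottom -- and the fact names are my own assumptions):

u13 : udnbr l1 l3.   u31 : udnbr l3 l1.
u24 : udnbr l2 l4.   u42 : udnbr l4 l2.
r12 : lrnbr l1 l2.   r21 : lrnbr l2 l1.
r34 : lrnbr l3 l4.   r43 : lrnbr l4 l3.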

Indeed, a query such as

#query * * * 3
(light l1 on * light l2 on * light l3 on * light l4 on
* sighuman l1 * sighuman l3 * sighuman l2 * sighuman l4)
-o {light l1 X1 * light l2 X2 * light l3 X3 * light l4 X4}.

loops infinitely: the simulation never saturates.

The working version (corresponding to the working pd program) is encoded as follows:

handle_sighuman : sighuman N * lrnbr N N1 * udnbr N N2
-o {sigin N * sigin N1 * sigin N2}.

handle_sigin : sigin N * light N S * opp S S'
-o {light N S'}.

Output signals from toggles are no longer needed; the human signal just directly toggles each neighbor. The input signal then just flips the indicator and emits no output (it could do so, but it would never be used).
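These two rules can be read as multiset rewriting, which a short simulation makes concrete. The grid layout is an assumption, and encoding the linear context as a Python list is of course only a sketch:

```python
# The working encoding as forward chaining: handle_sighuman consumes
# sighuman N and emits sigin for N and both neighbors; handle_sigin
# consumes sigin N and flips light N (the opp S S' step). Rewriting
# stops when the context holds no more signals.

LRNBR = {"l1": "l2", "l2": "l1", "l3": "l4", "l4": "l3"}
UDNBR = {"l1": "l3", "l2": "l4", "l3": "l1", "l4": "l2"}

def run(lights, presses):
    ctx = [("sighuman", n) for n in presses]    # linear context
    while ctx:
        kind, n = ctx.pop()
        if kind == "sighuman":                  # handle_sighuman
            ctx += [("sigin", n), ("sigin", LRNBR[n]), ("sigin", UDNBR[n])]
        else:                                   # handle_sigin
            lights[n] = not lights[n]
    return lights                               # saturated: no signals left

# As in the query above: press every light once, starting all-on. Each
# light gets flipped three times (its own press plus one per neighbor).
final = run({n: True for n in LRNBR}, ["l1", "l2", "l3", "l4"])
```

Unlike the first encoding, this always saturates: each sighuman becomes exactly three sigins, and each sigin disappears after one flip.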

### Programmatic Broadcast: use case for positive ∀?

This encoding has made a seemingly complicated interaction pattern much less interesting: it's now just a bunch of broadcasts from a single source that affect overlapping sets of receivers.

But there's still a component of this interaction pattern that's difficult to express in linear logic/CLF, which has to do with the fact that I've gotten away with a pretty big cheat by limiting the puzzle to a 2x2 grid: every cell uniformly has one udnbr and one lrnbr. In a bigger puzzle, cells can have two, three, or four neighbors. Ideally, we could program the neighbor relation separately, and have a toggle trigger the input signals of all its neighbors.

This is a sort of message copying, but it isn't the standard repeat described by ! in linear logic. Instead, it's a broadcast:

Instead of indicating the infinitary nature of the message itself, we'd like to be able to describe the set of recipients infinitarily (even if they are in fact finite). We might think to use a universal quantifier, and write something like the following CLF (read Pi as forall) to broadcast to all neighbors:

handle : sigout N -o {Pi x. nbr x N -o sigin x}.

...but in the semantics of proof search, all this does is let us nondeterministically pick one neighbor to propagate an input signal to. We can instantiate the quantifier with any node N', and supposing we have that nbr N' N, the quantified statement will be replaced with sigin N', its original form discarded, unable to be used by other neighbors. (Furthermore, the quantifier might even be applied to a node that isn't a neighbor, leaving a useless implication in the context indefinitely!) And making the quantified statement replicating (with !) would be no better: since nbr facts are persistent, the same neighbor could instantiate the quantifier arbitrarily many times, making the message both repeated and broadcast.

In the sense that a universal quantifier is a form of infinitary conjunction, this suggests to me that CLF's Pi is like infinitary & ("with", additive conjunction in linear logic), in that proof search can choose how to quantify it, but it can't make several simultaneous choices. What we want for this behavior is something like an infinitary $\otimes$, i.e. a positive universal quantifier. Jeff Sarnat informed me that some literature on bunched implications explores such a quantifier, but that it is folklorically known to be broken; I'm afraid I'm not well-versed enough in BI to understand why, but would be interested to try if someone can explain. (My understanding of BI itself is pretty limited.)

### Alternative to instance parameters?

Another slightly kludgy thing about the CLF implementation -- although it's a cute dependent types hack -- is the instance parameters, i.e. the fact that every predicate takes an argument to indicate which toggle it's talking about. Again (and this goes back to my initial comments about modularity and interface boundaries) it would be nice if we could describe the lightswitch interface generically and indicate separately the relationships between several instances of them. This begins to sound suspiciously similar to (my nebulous understanding of) the early, message-passing-based part of object oriented languages' history, and I believe puredata itself is based on an object model. Accounting for some limited use cases of objects like this might not be outside the scope of this project.

## Monday, November 5, 2012

### Interactivity, week 2

My post about week 1 established a new research project I'm working on involving programming interactivity with substructural logic. This past week, I've been working on digesting some background and related work. One thing I did was spend a good long stretch working through the core ideas in Yuxin, Iliano, and Rob's paper Relating Reasoning Methods in Linear Logic and Process Algebra.

### Linear contexts as processes

The point of the paper is to establish that a so-called "logical preorder" on contexts $\Delta \preceq_l \Delta'$ defined (I'm simplifying to the strictly linear case¹) as

$\text{for all } C,\ \Delta \vdash C \text{ implies } \Delta' \vdash C$

where $\vdash$ is entailment in linear logic, corresponds to traditional notions of contextual preorder in CCS/process algebras, which are defined coinductively by a reduction relation extended over a laundry list of structure-preserving properties. The symmetric closure of the contextual preorder characterizes observational equivalence of processes.

A few things struck me as interesting in this development:

• The claim (or really several claims after unpacking) "Miller identified a fragment of linear logic that could be used to observe traces in his logical encoding of the π-calculus, thereby obtaining a language that corresponds to the Hennessy-Milner modal logic, which characterizes observational equivalence." I didn't know what Hennessy-Milner modal logic was, so I looked it up; it turns out to be a sort of temporal-ish Kripke modal logic where the [box] and [dia] operators are indexed by actions (i.e. send and receive, I suppose). p |= [a]F means "every transition from p along 'a' results in a state satisfying F", and p |= <a>F means "there exists a state q reachable from p along a s.t. q |= F". This strikes me as potentially related to Rob's generative invariants, although a meeting today convinces me I understand these a lot less well than I thought I did.
• The intermediate labeled transition system on which they defined something called the simulation preorder to mediate the logical and contextual preorder. This labeled transition system's judgment takes the form Δ -β-> Δ', where β can be "silent" (an internal state change), or a send or receive of an atomic proposition, from or to an external process. The only "communication" rules are

$\frac{}{\Delta, a \longrightarrow^{!a} \Delta} \qquad \frac{} {\Delta, a \multimap B \longrightarrow^{?a} \Delta, B}$
$\frac {\Delta_1\longrightarrow^{!a}\Delta_1'\qquad\Delta_2\longrightarrow^{?a}\Delta_2'} {\Delta_1, \Delta_2 \longrightarrow \Delta_1',\Delta_2'}$

These seem dissatisfying or noncanonical somehow. The analogy to session types, if it ever existed, vanishes: the "choice" operator & is silent (internal), and there is no corresponding external choice (+). I could imagine somehow admitting this interaction by rules something like
$\frac{}{\Delta, x:A_1\oplus A_2 \longrightarrow^{!x.i} \Delta, x:A_i} \qquad \frac{}{\Delta, x:A_1\& A_2\longrightarrow^{?x.i} \Delta, x:A_i}$

but I am not sure if these (particularly the + rule) would satisfy the theorems wrt the logical preorder.
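To make the Hennessy-Milner semantics from the first bullet concrete, here's a tiny checker over an explicit labeled transition system; the states, actions, and formula encoding are all made up for illustration:

```python
# Hennessy-Milner modal logic over a finite LTS. Formulas are nested
# tuples: ("tt",), ("box", a, F), ("dia", a, F).

LTS = {("p", "a"): {"q", "r"},    # (state, action) -> successor states
       ("q", "b"): {"p"}}

def sat(state, formula):
    op = formula[0]
    if op == "tt":
        return True
    _, a, f = formula
    succs = LTS.get((state, a), set())
    if op == "box":     # [a]F: every a-successor satisfies F
        return all(sat(s, f) for s in succs)
    if op == "dia":     # <a>F: some a-successor satisfies F
        return any(sat(s, f) for s in succs)
    raise ValueError(op)

# p |= <a>[b]tt: some a-step from p lands where every b-step satisfies tt.
ok = sat("p", ("dia", "a", ("box", "b", ("tt",))))
```

Here ok comes out True: p can step along a to q, whose only b-successor satisfies tt. Note that [box] is vacuously true at states with no matching successors, while <dia> is false there, which is exactly the send/receive observability the action-indexed modalities capture.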

Anyway, my high-level, head-in-the-clouds digestion and re-enmanglement of this work is this: while the session types view treats processes as variables in a linear context offering or using certain services, interpretable as a set of value streams, one might also use a whole linear context to model a single process, interpretable as a stream of value sets.

Why am I thinking about streams in the first place?

### Functional reactive programming!

Hey, it turns out there's a beginner-level, interactive tutorial on FRP now! As it says, the computational model of FRP is to give a bunch of combinators over signals, which are time-varying values. Elm looks super cool, and this is basically a functional approach to the kind of programming I'm interested in for this research. I've also got the source to Neel's top-secret compiler based on this work and this successor, which address the problem that "the intuitively appealing idea of modelling A-valued signals as elements of the stream type Aω ... and reactive systems as stream functions Inputω -> Outputω does not rule out systems that violate causality (the output today can depend upon the input tomorrow) or reactivity (ill-founded feedback can lead to undefined behaviour)."
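The causality constraint is easy to state concretely: if a stream transformer is built from a per-step function (roughly a Mealy machine, which is the shape these type systems carve out), its output at time t physically cannot consult inputs after t. A sketch, with made-up names:

```python
# A "causal by construction" stream transformer: the only way to
# produce output t is from the state accumulated through input t.

def causal(step, init, xs):
    state, out = init, []
    for x in xs:
        state, y = step(state, x)   # sees xs[:t+1] only, via state
        out.append(y)
    return out

# Running sum as a causal transformer.
sums = causal(lambda s, x: (s + x, s + x), 0, [1, 2, 3])
```

An acausal definition like out[t] = xs[t+1] simply has no step function of this shape, which is the property the bare stream-function type Inputω -> Outputω fails to enforce.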

As a philosophical aside, I've been trying to ask myself what value a logic programming approach would add, instead of, say, working directly on FRP. One answer, I think, is just a page out of the same book we've been writing as a research group for at least as long as I've been a part of it: underlying these programming models are some deep logical ideas that keep reappearing, and the better we understand how those ideas relate to one another, the better we can design full-featured languages around them; the faster all ideas can advance. Thinking about an interactive program as interactive proof search means we can apply the full arsenal of techniques and theory we have about proof search to the act of programming. It's the reasoning tools this has the potential to give us (imagine, for example, model checking a game, or automatically generating simulations of player input) that makes me interested in exploring it from the (rather stripped-down) standpoint of logic programming in well-understood logics. For example, I wonder (partly due to Neel's explicit suggestion) whether the semantics/reasoning approaches we might use for dealing with the nondeterminism in a forward-chaining logic program might be related to Neel and Nick's model of nondeterministic user input streams.

### Communication graph programming

From a completely different angle, as prompted by Jim McCann, I've been looking a bit at puredata (an open source relative of Max/MSP), which is used for programming audio and, to some extent, graphics/animation. I really don't know much about what kinds of things one can expect to build with it, but I intend to do some experimenting and playing with examples. The tutorial-level examples for puredata feel a lot like a "visual" version of functional reactive programming (and I don't think the "visual" nature of the language is really essential to its theory; arguing about whether visual languages make any sense is, to me, a matter of taste in syntax, i.e. bikeshedding). One data source might be a microphone or a generated sine wave, and then that signal can be connected via "wires" to various filters and other functions.

What's interesting about this model to me is that it lets you directly configure the communication graph. It may not make much difference to a program's semantics whether such a thing is explicit or inferred, but for a programmer reasoning about it, I think it makes a lot of sense -- certainly there are PhDs' worth of Eclipse plugins devoted to helping programmers do exactly that.

It would be interesting, I think, to see if functional reactive programs could have a "view" like this (or if the programmatic representation of puredata could be FRP).

### Narrowing focus

My goal is to spend this week isolating a specific informative example that I can try to model from each of these points of view, as in actually write a functional reactive program, a communication graph, and a linear logic program, that each adequately represent it. I'm thinking some sort of simple, discrete game, like maybe a 15 puzzle or Lights Out.

---

¹ Actually, the version in the paper not only adds a persistent context but also quantifies over all extensions of the context, i.e. defines the preorder $\Delta \preceq_l \Delta'$ as
$\text{for all } C \text{ and } \Delta'',\ \Delta, \Delta'' \vdash C \text{ implies } \Delta', \Delta'' \vdash C$

...but I'm not sure if it matters -- at least the theorems I checked don't seem to depend on this extension.