Interactivity, week 2

My post about week 1 established a new research project I'm working on involving programming interactivity with substructural logic. This past week, I've been working on digesting some background and related work. One thing I did was spend a good long stretch working through the core ideas in Yuxin, Iliano, and Rob's paper Relating Reasoning Methods in Linear Logic and Process Algebra.

Linear contexts as processes


The point of the paper is to establish that a so-called "logical preorder" on contexts, Δ ⪯_l Δ', defined (I'm simplifying to the strictly linear case1) as

\[
\text{for all } C,\quad \Delta \vdash C \ \text{implies}\ \Delta' \vdash C
\]

where ⊢ is entailment in linear logic, corresponds to traditional notions of contextual preorder in CCS/process algebras, which are defined coinductively by a reduction relation extended over a laundry list of structure-preserving properties. The symmetric closure of the contextual preorder characterizes observational equivalence of processes.


A few things struck me as interesting in this development:

  • The claim (or really several claims after unpacking) "Miller identified a fragment of linear logic that could be used to observe traces in his logical encoding of the π-calculus, thereby obtaining a language that corresponds to the Hennessy-Milner modal logic, which characterizes observational equivalence." I didn't know what Hennessy-Milner modal logic was, so I looked it up; it turns out to be a sort of temporal-ish Kripke modal logic where the [box] and [dia] operators are indexed by actions (i.e. send and receive, I suppose). p |= [a]F means "every state reachable from p by an a-transition satisfies F" (vacuously true if there is no a-transition), and p |= <a>F means "there exists a state q reachable from p along a s.t. q |= F". This strikes me as potentially related to Rob's generative invariants, although a meeting today convinced me I understand these a lot less well than I thought I did.
  • The intermediate labeled transition system on which they define a "simulation preorder" mediating between the logical and contextual preorders. This labeled transition system's judgment takes the form Δ -β-> Δ', where β can be "silent" (an internal state change), or a send or receive of an atomic proposition, to or from an external process. The only "communication" rules are

\[
\frac{}{\Delta, a \longrightarrow^{!a} \Delta}
\qquad
\frac{}
{\Delta, a \multimap B \longrightarrow^{?a} \Delta, B}
\]
\[
\frac
{\Delta_1\longrightarrow^{!a}\Delta_1'\qquad\Delta_2\longrightarrow^{?a}\Delta_2'}
{\Delta_1, \Delta_2 \longrightarrow \Delta_1',\Delta_2'}
\]

These seem dissatisfying or noncanonical somehow. The analogy to session types, if it ever existed, vanishes: the "choice" operator & is silent (internal), and there is no corresponding external choice (+). I could imagine somehow admitting this interaction by rules something like
\[
\frac{}{\Delta, x:A_1\oplus A_2 \longrightarrow^{!x.i} \Delta, x:A_i}
\qquad
\frac{}{\Delta, x:A_1\& A_2\longrightarrow^{?x.i} \Delta, x:A_i}
\]

but I am not sure if these (particularly the + rule) would satisfy the theorems wrt the logical preorder.
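For my own understanding, here's a direct transcription of those three communication rules into code (a hypothetical, simplified sketch; `transitions` and the representation of contexts as multisets are my own choices, not the paper's): a context is a `Counter` of formulas, and each rule becomes one case in a function enumerating the labeled transitions.

```python
from collections import Counter

# A context is a multiset (Counter) of formulas; atoms are strings
# and a -o B is represented as ('lolli', 'a', B).
def transitions(delta):
    """Enumerate the labeled transitions of a context, one case per
       rule: output '!a', input '?a', and silent synchronization 'tau'."""
    steps = []
    for f in list(delta):
        rest = delta - Counter([f])
        if isinstance(f, str):
            # Delta, a --!a--> Delta
            steps.append(('!' + f, rest))
        else:
            # Delta, a -o B --?a--> Delta, B
            steps.append(('?' + f[1], rest + Counter([f[2]])))
    # Silent step: an output and an input on the same atom, in disjoint
    # parts of the context, synchronize internally.
    for f in list(delta):
        if isinstance(f, tuple) and delta[f[1]] > 0:
            steps.append(('tau', delta - Counter([f, f[1]]) + Counter([f[2]])))
    return steps
```

For example, the context {a, a ⊸ b} can output a, input another a, or silently step to {b} -- which is exactly the noncanonical feel I mean: every atom in the context is simultaneously up for grabs as an output and as fuel for internal synchronization.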

Anyway, my high-level, head-in-the-clouds digestion and re-enmanglement of this work is this: while the session-types view treats each variable in a linear context as a process offering or using certain services -- interpretable as a set of value streams -- one might instead use a whole linear context to model a single process, interpretable as a stream of value sets.

Why am I thinking about streams in the first place?

Functional reactive programming!


Hey, it turns out there's a beginner-level, interactive tutorial on FRP now! As it says, the computational model of FRP is to give a bunch of combinators over signals, which are time-varying values. Elm looks super cool, and this is basically a functional approach to the kind of programming I'm interested in for this research. I've also got the source to Neel's top-secret compiler based on this work and this successor, which address the problem that "the intuitively appealing idea of modelling A-valued signals as elements of the stream type Aω ... and reactive systems as stream functions Inputω -> Outputω does not rule out systems that violate causality (the output today can depend upon the input tomorrow) or reactivity (ill-founded feedback can lead to undefined behaviour)."
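As a concrete (and entirely hypothetical, Python rather than Elm) illustration of the causality point: if a signal transformer is built only from a foldp-style combinator (Elm's "fold from the past"), the nth output can depend only on inputs 0..n, so causality holds by construction rather than by restriction.

```python
def foldp(step, init, inputs):
    """Elm-style 'fold from the past': the output at time n depends
       only on inputs at times 0..n, so the result is causal."""
    state = init
    for x in inputs:
        state = step(x, state)
        yield state

# A click counter as a signal transformer over a (finite, for demo) stream.
clicks = ['click', 'noop', 'click', 'click']
counts = list(foldp(lambda x, n: n + 1 if x == 'click' else n, 0, clicks))
```

A stream function written freely over Aω could instead peek ahead at tomorrow's input; the combinator discipline is what rules that out.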

As a philosophical aside, I've been trying to ask myself what value a logic programming approach would add, instead of, say, working directly on FRP. One answer, I think, is just a page out of the same book we've been writing as a research group for at least as long as I've been a part of it: underlying these programming models are some deep logical ideas that keep reappearing, and the better we understand how those ideas relate to one another, the better we can design full-featured languages around them, and the faster all of these ideas can advance. Thinking about an interactive program as interactive proof search means we can apply the full arsenal of techniques and theory we have about proof search to the act of programming. It's the reasoning tools this has the potential to give us (imagine, for example, model checking a game, or automatically generating simulations of player input) that make me interested in exploring it from the (rather stripped-down) standpoint of logic programming in well-understood logics. For example, I wonder (partly due to Neel's explicit suggestion) whether the semantics/reasoning approaches we might use for dealing with the nondeterminism in a forward-chaining logic program might be related to Neel and Nick's model of nondeterministic user input streams.

Communication graph programming


From a completely different angle, as prompted by Jim McCann, I've been looking a bit at puredata (an open-source relative of Max/MSP), which is used for programming audio and, to some extent, graphics/animation. I really don't know much about what kinds of things one can expect to build with it, but I intend to do some experimenting and playing with examples. The tutorial-level examples for puredata feel a lot like a "visual" version of functional reactive programming (and I don't think the "visual" nature of the language is really essential to its theory; arguing about whether visual languages make any sense is, to me, a matter of taste in syntax, i.e. bikeshedding). One data source might be a microphone or a generated sine wave, and then that signal can be connected via "wires" to various filters and other functions.
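To see what that wiring might look like programmatically, here's a toy push-based dataflow graph (my own sketch, nothing like puredata's actual implementation; `Node`, `wire_to`, and `push` are invented names): nodes compute from their inputs, and wires are explicit edges you configure directly.

```python
class Node:
    """A dataflow node: applies fn to its inputs' latest values and
       pushes the result along explicit outgoing wires."""
    def __init__(self, fn):
        self.fn = fn
        self.inputs = []
        self.outputs = []   # explicit wires: the communication graph
        self.value = None

    def wire_to(self, other):
        self.outputs.append(other)
        other.inputs.append(self)

    def push(self, *args):
        self.value = self.fn(*args)
        for out in self.outputs:
            # Re-evaluate each downstream node from its inputs' values.
            out.push(*(n.value for n in out.inputs))

# A source feeding a gain "filter", like a signal generator into a multiplier.
source = Node(lambda x: x)
gain = Node(lambda x: x * 0.5)
source.wire_to(gain)
source.push(440)
```

After the push, `gain` holds the attenuated signal value; the point is that the graph itself (`wire_to` calls) is a first-class, directly manipulable object.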

What's interesting about this model to me is that it lets you directly configure the communication graph. It may not make much difference to a program's semantics whether such a thing is explicit or inferred, but for a programmer reasoning about the program, I think making it explicit makes a lot of sense -- certainly there are PhDs' worth of Eclipse plugins devoted to helping programmers do exactly that.

It would be interesting, I think, to see if functional reactive programs could have a "view" like this (or if the programmatic representation of puredata could be FRP).

Narrowing focus 


My goal is to spend this week isolating a specific, informative example that I can try to model from each of these points of view -- as in, actually write a functional reactive program, a communication graph, and a linear logic program that each adequately represent it. I'm thinking some sort of simple, discrete game, maybe a 15 puzzle or Lights Out.

---

1Actually, the version in the paper not only adds a persistent context but also quantifies over all extensions of the context, i.e. defines the preorder Δ ⪯_l Δ' as

\[
\text{for all } C \text{ and } \Delta'',\quad \Delta, \Delta'' \vdash C \ \text{implies}\ \Delta', \Delta'' \vdash C
\]

...but I'm not sure if it matters -- at least the theorems I checked don't seem to depend on this extension.
