Interactivity, week 2

My post about week 1 established a new research project I'm working on involving programming interactivity with substructural logic. This past week, I've been working on digesting some background and related work. One thing I did was spend a good long stretch working through the core ideas in Yuxin, Iliano, and Rob's paper Relating Reasoning Methods in Linear Logic and Process Algebra.

Linear contexts as processes

The point of the paper is to establish that a so-called "logical preorder" on contexts \(\Delta \preceq_l \Delta'\), defined (I'm simplifying to the strictly linear case¹) as

\[
\text{for all } C,\quad \Delta \vdash C \ \text{implies}\ \Delta' \vdash C
\]

where \(\vdash\) is entailment in linear logic, corresponds to traditional notions of contextual preorder in CCS/process algebras, which are defined coinductively by a reduction relation extended over a laundry list of structure-preserving properties. The symmetric closure of the contextual preorder characterizes observational equivalence of processes.

A few things struck me as interesting in this development:

  • The claim (or really several claims after unpacking) "Miller identified a fragment of linear logic that could be used to observe traces in his logical encoding of the π-calculus, thereby obtaining a language that corresponds to the Hennessy-Milner modal logic, which characterizes observational equivalence." I didn't know what Hennessy-Milner modal logic was, so I looked it up; it turns out to be a sort of temporal-ish Kripke modal logic where the box and diamond operators are indexed by actions (i.e. send and receive, I suppose). p |= [a]F means "every state reachable from p by an a-transition satisfies F", and p |= <a>F means "there exists a state q reachable from p along a such that q |= F". This strikes me as potentially related to Rob's generative invariants, although a meeting today convinces me I understand these a lot less well than I thought I did.
  • The intermediate labeled transition system on which they define something called the simulation preorder, mediating between the logical and contextual preorders. This labeled transition system's judgment takes the form Δ -β-> Δ', where β can be "silent" (an internal state change), or a send or receive of an atomic proposition, from or to an external process. The only "communication" rules are

\[
\frac{}{\Delta, a \longrightarrow^{!a} \Delta}
\qquad
\frac{}{\Delta, a \multimap B \longrightarrow^{?a} \Delta, B}
\qquad
\frac{\Delta_1 \longrightarrow^{!a} \Delta_1' \quad \Delta_2 \longrightarrow^{?a} \Delta_2'}{\Delta_1, \Delta_2 \longrightarrow \Delta_1', \Delta_2'}
\]

These seem dissatisfying or noncanonical somehow. The analogy to session types, if it ever existed, vanishes: the "choice" operator & is silent (internal), and there is no corresponding external choice (+). I could imagine admitting this interaction with rules something like

\[
\frac{}{\Delta, x{:}A_1 \oplus A_2 \longrightarrow^{!x.i} \Delta, x{:}A_i}
\qquad
\frac{}{\Delta, x{:}A_1 \,\&\, A_2 \longrightarrow^{?x.i} \Delta, x{:}A_i}
\]

but I am not sure if these (particularly the + rule) would satisfy the theorems wrt the logical preorder.
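To make the Hennessy-Milner semantics concrete for myself, here's a minimal model checker in Python. The LTS and formula encoding are entirely my own invention for illustration: box [a]F holds at a state iff every a-successor satisfies F, and diamond <a>F iff some a-successor does.

```python
# Hennessy-Milner modal logic over a tiny labeled transition system.
# States and actions are strings; `trans` maps (state, action) -> set of successors.
# Formulas are tuples: ("top",), ("and", F, G), ("box", a, F), ("dia", a, F).

trans = {
    ("p", "a"): {"q", "r"},
    ("q", "b"): {"s"},
    ("r", "b"): {"s"},
}

def succs(state, action):
    return trans.get((state, action), set())

def sat(state, formula):
    tag = formula[0]
    if tag == "top":
        return True
    if tag == "and":
        return sat(state, formula[1]) and sat(state, formula[2])
    if tag == "box":   # [a]F: every a-successor satisfies F (vacuously true if none)
        _, a, f = formula
        return all(sat(t, f) for t in succs(state, a))
    if tag == "dia":   # <a>F: some a-successor satisfies F
        _, a, f = formula
        return any(sat(t, f) for t in succs(state, a))
    raise ValueError(f"unknown formula tag: {tag}")

# p can do 'a' then 'b', so <a><b>top holds at p.
print(sat("p", ("dia", "a", ("dia", "b", ("top",)))))  # True
# [a]<b>top: after any 'a', a 'b' is still possible (both q and r can do 'b').
print(sat("p", ("box", "a", ("dia", "b", ("top",)))))  # True
# <b>top fails at p: p has no 'b' transition.
print(sat("p", ("dia", "b", ("top",))))  # False
```

The simulation preorder is exactly the relation this logic characterizes: p is below q when every HML formula satisfied at p is satisfied at q.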

Anyway, my high-level, head-in-the-clouds digestion and re-enmanglement of this work is this: while the session-types view of processes is as variables in a linear context offering or using certain services (interpretable as a set of value streams), one might also use a linear logic context to model a single process, interpretable as a stream of value sets.

Why am I thinking about streams in the first place?

Functional reactive programming!

Hey, it turns out there's a beginner-level, interactive tutorial on FRP now! As it says, the computational model of FRP is to give a bunch of combinators over signals, which are time-varying values. Elm looks super cool, and this is basically a functional-style approach to the kind of programming I'm interested in for this research. I've also got the source to Neel's top-secret compiler based on this work and this successor, which address the problem that "the intuitively appealing idea of modelling A-valued signals as elements of the stream type Aω ... and reactive systems as stream functions Inputω -> Outputω does not rule out systems that violate causality (the output today can depend upon the input tomorrow) or reactivity (ill-founded feedback can lead to undefined behaviour)."
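As a toy illustration of that causality problem (the representation and names here are entirely mine, not from any of the cited work): if signals are just streams, here finite Python lists, nothing stops a "stream function" from reading tomorrow's input. A combinator that builds outputs by scanning over the input prefix rules that out by construction.

```python
# Signals as finite streams (lists). A stream function list -> list is *causal*
# if output[t] depends only on input[0..t].

def causal_scan(step, init, inputs):
    """Build a causal stream function: state evolves one tick at a time,
    so the output at time t can only have seen inputs up through time t."""
    out, state = [], init
    for x in inputs:
        state = step(state, x)
        out.append(state)
    return out

def running_sum(inputs):
    # Causal: a fold over the prefix of the input.
    return causal_scan(lambda s, x: s + x, 0, inputs)

def peek_tomorrow(inputs):
    # NOT causal: the output today reads tomorrow's input.
    return [inputs[min(t + 1, len(inputs) - 1)] for t in range(len(inputs))]

xs = [1, 2, 3, 4]
print(running_sum(xs))    # [1, 3, 6, 10]
print(peek_tomorrow(xs))  # [2, 3, 4, 4] -- output at t depends on input at t+1
```

The type-theoretic work referenced above bakes this restriction into the type system rather than leaving it as a programming discipline.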

As a philosophical aside, I've been trying to ask myself what value a logic programming approach would add, instead of, say, working directly on FRP. One answer, I think, is just a page out of the same book we've been writing as a research group for at least as long as I've been a part of it: underlying these programming models are some deep logical ideas that keep reappearing, and the better we understand how those ideas relate to one another, the better we can design full-featured languages around them, and the faster all the ideas can advance. Thinking about an interactive program as interactive proof search means we can apply the full arsenal of techniques and theory we have about proof search to the act of programming. It's the reasoning tools this has the potential to give us (imagine, for example, model checking a game, or automatically generating simulations of player input) that make me interested in exploring it from the (rather stripped-down) standpoint of logic programming in well-understood logics. For example, I wonder (partly due to Neel's explicit suggestion) whether the semantics/reasoning approaches we might use for dealing with the nondeterminism in a forward-chaining logic program might be related to Neel and Nick's model of nondeterministic user input streams.

Communication graph programming

From a completely different angle, as prompted by Jim McCann, I've been looking a bit at puredata (an open source relative of Max/MSP), which is used for programming audio and, to some extent, graphics/animation. I really don't know much about what kinds of things one can expect to build with it, but I intend to do some experimenting and playing with examples. The tutorial-level examples for puredata feel a lot like a "visual" version of functional reactive programming (and I don't think the "visual" nature of the language is really essential to its theory; arguing about whether visual languages make any sense is, to me, a matter of taste in syntax, i.e. bikeshedding). One data source might be a microphone or a generated sine wave, and then that signal can be connected via "wires" to various filters and other functions.
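A crude sketch of that wiring model (this is my own toy pull-based dataflow, nothing like puredata's actual engine): each node pulls samples from whatever is wired into it, so "patching" is just building a graph of references.

```python
import math

# A toy pull-based dataflow graph: each node computes its value at time t
# by pulling from the nodes wired into its inputs.

class Sine:
    """A signal source: a sine wave at a given frequency (Hz)."""
    def __init__(self, freq):
        self.freq = freq
    def sample(self, t):
        return math.sin(2 * math.pi * self.freq * t)

class Gain:
    """A filter node: scales whatever source it's wired to."""
    def __init__(self, src, factor):
        self.src, self.factor = src, factor   # the "wire" is just a reference
    def sample(self, t):
        return self.factor * self.src.sample(t)

class Add:
    """A mixer node: sums two wired-in sources."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def sample(self, t):
        return self.a.sample(t) + self.b.sample(t)

# Patch: two sine sources mixed, then attenuated by half.
graph = Gain(Add(Sine(440), Sine(220)), 0.5)
samples = [graph.sample(t / 8000) for t in range(4)]  # four ticks at 8 kHz
print(samples)
```

The point of the sketch is just that the communication graph is a first-class, directly manipulable object, which is the property discussed below.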

What's interesting about this model to me is that it lets you directly configure the communication graph. It may not make much difference to a program's semantics whether such a thing is explicit or inferred, but for a programmer's ability to reason about it, I think it makes a lot of sense -- certainly there are PhDs' worth of Eclipse plugins devoted to helping programmers do exactly that.

It would be interesting, I think, to see if functional reactive programs could have a "view" like this (or if the programmatic representation of puredata could be FRP).

Narrowing focus 

My goal is to spend this week isolating a specific, informative example that I can try to model from each of these points of view: that is, actually write a functional reactive program, a communication graph, and a linear logic program that each adequately represent it. I'm thinking some sort of simple, discrete game, like maybe a 16 puzzle or Lights Out.
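For concreteness, here is the core state transition of Lights Out that all three encodings would have to express (a hypothetical 3x3 version; the grid size and representation are my own choices): pressing a cell toggles it and its orthogonal neighbors, and the goal state is all lights off.

```python
# Lights Out on a 3x3 grid: the state is the set of lit (row, col) cells.
# Pressing (r, c) toggles that cell and its in-bounds orthogonal neighbors.

N = 3

def press(lit, r, c):
    toggled = set(lit)
    for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N:
            toggled ^= {(rr, cc)}   # symmetric difference flips membership
    return toggled

def solved(lit):
    return not lit

state = {(0, 0), (0, 1), (1, 0)}   # three lights on in the corner
state = press(state, 0, 0)          # one press toggles exactly those three
print(solved(state))  # True
```

The interactive part, which is the whole point of the project, is the loop that waits for the next press; that's where the three paradigms should differ most.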


¹Actually, the version in the paper not only adds a persistent context but also quantifies over all extensions of the context, i.e. defines the preorder \(\Delta \preceq_l \Delta'\) as

\[
\text{for all } C \text{ and } \Delta'',\quad \Delta, \Delta'' \vdash C \ \text{implies}\ \Delta', \Delta'' \vdash C
\]

...but I'm not sure if it matters -- at least the theorems I checked don't seem to depend on this extension.

