
Showing posts with the label logic programming

Interactivity, week 3

Last week I discussed some newly explored proof-theoretic relationships between reasoning about process calculi and substructural logics, and vaguely mentioned some other programming models that I wanted to compare. Since then, I think I've come up with a way to phrase my research objective for this project: to relate the programming and reasoning methodologies surrounding interactivity through a common medium of proof theory. That's not a thesis statement, but I figure it's better than "*waves hands excitedly* interactivity! Processes! Linear logic! Focusing!" Anyway. What I've done so far has given me more questions than answers, but I like the questions a lot. The first of these questions I want to discuss in this post is "How can proof theory capture an appropriate notion of broadcast message replication?" To explain what I mean by that, I'll describe an example that I think captures some common interactive communication patterns. ...
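To pin down the pattern in question (an operational picture only, not the post's proof-theoretic encoding), here is a minimal Haskell sketch of broadcast-style message replication: dupChan gives each subscriber its own copy of every subsequent write, so one message is replicated to all of them.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Chan (newChan, dupChan, readChan, writeChan)
import Control.Monad (forever)

main :: IO ()
main = do
  bus <- newChan
  -- Each subscriber reads from its own duplicate of the bus; writes
  -- made after dupChan are delivered to every duplicate.
  sub1 <- dupChan bus
  sub2 <- dupChan bus
  _ <- forkIO . forever $ readChan sub1 >>= putStrLn . ("sub1 got: " ++)
  _ <- forkIO . forever $ readChan sub2 >>= putStrLn . ("sub2 got: " ++)
  -- A single write is broadcast to both subscribers.
  writeChan bus "toggle"
  writeChan bus "quit"
  threadDelay 100000  -- crude: give the subscriber threads time to print
```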

Interactivity, week 2

My post about week 1 established a new research project I'm working on involving programming interactivity with substructural logic. This past week, I've been working on digesting some background and related work. One thing I did was spend a good long stretch working through the core ideas in Yuxin, Iliano, and Rob's paper Relating Reasoning Methods in Linear Logic and Process Algebra.

Linear contexts as processes

The point of the paper is to establish that a so-called "logical preorder" on contexts $$\Delta \preceq_l \Delta'$$, defined (I'm simplifying to the strictly linear case¹) as: for all $$C$$, $$\Delta \vdash C$$ implies $$\Delta' \vdash C$$, where $$\vdash$$ is entailment in linear logic, corresponds to traditional notions of contextual preorder in CCS/process algebras, which are defined coinductively by a reduction relation extended over a laundry list of structure-preserving properties. The symmetric closure of th...
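As a quick formal restatement (a minimal sketch with hypothetical names: `Ctx`, `Form`, and `Entails` abstract linear contexts, propositions, and derivability rather than encoding linear logic itself), the logical preorder and the fact that it really is a preorder look like this in Lean:

```lean
-- Hypothetical names: `Ctx` for linear contexts, `Form` for
-- propositions, `Entails Δ C` standing in for Δ ⊢ C.
variable {Ctx Form : Type} (Entails : Ctx → Form → Prop)

-- Δ ≼ Δ' iff every C provable from Δ is provable from Δ'.
def LogicalPreorder (Δ Δ' : Ctx) : Prop :=
  ∀ C, Entails Δ C → Entails Δ' C

-- Reflexivity and transitivity are immediate, so ≼ is a preorder.
theorem logicalPreorder_refl (Δ : Ctx) :
    LogicalPreorder Entails Δ Δ :=
  fun _ h => h

theorem logicalPreorder_trans {Δ₁ Δ₂ Δ₃ : Ctx}
    (h₁ : LogicalPreorder Entails Δ₁ Δ₂)
    (h₂ : LogicalPreorder Entails Δ₂ Δ₃) :
    LogicalPreorder Entails Δ₁ Δ₃ :=
  fun C h => h₂ C (h₁ C h)
```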

Programming interactivity with substructural logic

This will be the first of four posts in which I attempt to organize my thoughts about an idea for a project. It's related to logic programming for interactive fiction and the OBT talk I gave this past January, but I'm trying to extend the ideas beyond interactive fiction -- which represents one particular interaction pattern -- into more general process-calculus-ish terms. Rob Simmons, who just defended a thesis that I take a great deal of inspiration from, has a good primer on substructural logic programming over on his blog. To review briefly, the key concepts I'm building on are: logic programming, in which a program is specified by declaring several atomic base types and a collection of logical propositions describing how they relate to one another, and a particular proof search strategy determines the semantics of its execution; forward-chaining or bottom-up logic programming, a particular proof search (i.e. execution) strategy wh...
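Since the teaser cuts off mid-definition, here is a minimal Haskell sketch (all names illustrative, and over ground string facts rather than real logical terms) of what bottom-up execution means operationally: starting from a set of facts, fire rules and accumulate their conclusions until saturation, i.e., a least fixed point.

```haskell
import qualified Data.Set as Set

type Fact = String
type Rule = Set.Set Fact -> Set.Set Fact

-- One step: keep the old facts and add everything any rule derives.
step :: [Rule] -> Set.Set Fact -> Set.Set Fact
step rules facts = Set.unions (facts : map ($ facts) rules)

-- Bottom-up execution: iterate until no rule derives anything new.
saturate :: [Rule] -> Set.Set Fact -> Set.Set Fact
saturate rules facts
  | next == facts = facts
  | otherwise     = saturate rules next
  where next = step rules facts

-- Example rule: transitivity of a tiny `edge` relation.
transRule :: Rule
transRule facts = Set.fromList
  [ unwords ["edge", x, z]
  | e1 <- Set.toList facts, ["edge", x, y]  <- [words e1]
  , e2 <- Set.toList facts, ["edge", y', z] <- [words e2]
  , y == y' ]

-- Saturating {edge a b, edge b c} adds the derived fact `edge a c`.
main :: IO ()
main = mapM_ putStrLn . Set.toList $
  saturate [transRule] (Set.fromList ["edge a b", "edge b c"])
```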

#TWIL: monads, parallelism, geometry

It's been a high-input, low-output sort of week. In the interest of accountability and perhaps higher retention, I'll try to summarize some of the things I've learned. Themes include monads, parallelism, and geometry. :)

Umut Acar on Dynamic Parallel Programming

Umut describes two ideas from CMU theses with which I'm familiar, parallel cost semantics using computation graphs (Daniel Spoonhower) and self-adjusting computation (Umut himself, though I'm familiar with it through Ruy Ley-Wild). In the former, one can analyze the parallelism of an algorithm as a function of the dependencies between computations; the algorithm is characterized in terms of work, the total cost, and depth, the length of the longest critical path through the dependency graph. Bob Harper wrote a nice post summarizing these ideas. With self-adjusting (or dynamic) computation, the idea is that you can re-run a program on slightly-changed inputs and, if the problem is stable ...
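As a rough illustration of work and depth (a minimal sketch over series-parallel computation graphs, not Spoonhower's actual cost semantics), work adds costs everywhere, while depth adds in sequence but takes the maximum across parallel branches:

```haskell
-- Series-parallel computation graphs with a cost on each step.
data Graph
  = Step Int          -- a unit of computation with a given cost
  | Seq Graph Graph   -- sequential composition
  | Par Graph Graph   -- parallel composition

-- Work: total cost of the whole computation.
work :: Graph -> Int
work (Step c)  = c
work (Seq a b) = work a + work b
work (Par a b) = work a + work b

-- Depth: length of the longest critical path.
depth :: Graph -> Int
depth (Step c)  = c
depth (Seq a b) = depth a + depth b
depth (Par a b) = max (depth a) (depth b)

-- Two parallel unit steps followed by one more: work 3, depth 2.
main :: IO ()
main = print (work g, depth g)
  where g = Seq (Par (Step 1) (Step 1)) (Step 1)
```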
In Dale's Proof Search Foundations for Logic Programming talk, he gives a great minimalist example of an interactive system: toggling a switch. The encoding in Forum, a logic programming language based on full linear logic, goes like this:

flip on off.
flip off on.
toggle G o- sw V * flip V U * (sw U -o G).

or in logic, \[ \exists u, v.\ [sw\ v \otimes flip\ v\ u \otimes (sw\ u \multimap G)] \multimap toggle\ G \] (Yes, I mainly did that to test the $$\LaTeX$$ rendering script I'm using.) The talk I linked to gives a derivation (which I won't painstakingly copy here) illustrating the execution of the program: the goal sequent $$\Gamma; \Delta, sw\ on \longrightarrow toggle\ G$$ evolves through proof search into $$\Gamma; \Delta, sw\ off \longrightarrow G$$. But this is something of a counterintuitive, higher-order encoding. The "action" by a hypothetical user of toggling the switch must be indexed by $$G$$, the goal that continues the program. Her...
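One way to see why the encoding feels higher-order (a minimal sketch with hypothetical names, a continuation-passing reading rather than Forum's semantics): toggle consumes the current switch state and hands the flipped state to the rest of the program, which plays the role of the goal $$G$$.

```haskell
-- Hypothetical continuation-passing reading of the toggle clause.
data Sw = On | Off deriving Show

-- The flip facts: flip on off. flip off on.
flipSw :: Sw -> Sw
flipSw On  = Off
flipSw Off = On

-- `toggle` consumes the switch state and runs the continuation goal
-- with the flipped state, like proof search rewriting
--   Γ; Δ, sw on ⟶ toggle G   into   Γ; Δ, sw off ⟶ G.
toggle :: Sw -> (Sw -> r) -> r
toggle v g = g (flipSw v)

main :: IO ()
main = toggle On print   -- prints Off
```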

Logic Programming for Interactive Fiction

As a teaser-trailer to the ideas I have been long overdue to write up, allow me to play some silly games with words. All claims herein are nontechnical and unsubstantiated. PL theorists are fond of the snowclone X-as-Y, "props-as-types" and "proofs-as-programs" being the pervasive examples in functional programming. When we similarly try to explain logic programming logically, what comes out is more like "props as programs" and "proofs as traces". Put another way, in functional programming, computation is proof reduction, while in logic programming, computation is proof search. So, then, what is a game? (I mean in the sense of "something humans play for fun," and specifically "as a piece of software", not the game-theory sense.*) At a rough gloss, a game is an interactive program. An execution of a game alternates computation and accepting human input. Interactive fiction is a minimal distillation of the idea (in that it...