#TWIL: monads, parallelism, geometry

It's been a high-input, low-output sort of week. In the interest of accountability and perhaps higher retention, I'll try to summarize some of the things I've learned. Themes include monads, parallelism, and geometry. :)

Umut Acar on Dynamic Parallel Programming


Umut describes two ideas from CMU theses with which I'm familiar: parallel cost semantics using computation graphs (Daniel Spoonhower) and self-adjusting computation (Umut himself, though I'm familiar with it through Ruy Ley-Wild). In the former, one can analyze the parallelism of an algorithm as a function of the dependencies between computations; the algorithm is characterized in terms of work, the total cost, and depth, the length of the longest critical path through the dependency graph. Bob Harper wrote a nice post summarizing these ideas. With self-adjusting (or dynamic) computation, the idea is that you can re-run a program on slightly-changed inputs and, if the problem is stable (small input changes correspond to small output changes), the subsequent running time should be proportional to the change. That is, similar inputs give rise not only to similar outputs but also to similar computation traces.
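To make the work/depth vocabulary concrete, here's a minimal Haskell sketch (my own toy model, not Spoonhower's actual cost semantics): a computation is a tree of sequential and parallel compositions, work sums every cost, and depth takes the maximum across parallel branches.

```haskell
-- A toy model of a parallel computation: a Step is a unit of
-- sequential work, Seq runs two computations one after the other,
-- and Par runs two computations that may proceed in parallel.
data Comp = Step Int
          | Seq Comp Comp
          | Par Comp Comp

-- Work: total cost, summed over everything that runs.
work :: Comp -> Int
work (Step c)  = c
work (Seq a b) = work a + work b
work (Par a b) = work a + work b

-- Depth (a.k.a. span): length of the longest critical path.
depth :: Comp -> Int
depth (Step c)  = c
depth (Seq a b) = depth a + depth b
depth (Par a b) = max (depth a) (depth b)

-- Balanced divide-and-conquer over n unit steps, with one unit of
-- "combine" work per internal node: work is Theta(n), depth Theta(log n).
divideAndConquer :: Int -> Comp
divideAndConquer 1 = Step 1
divideAndConquer n =
  Seq (Par (divideAndConquer half) (divideAndConquer (n - half))) (Step 1)
  where half = n `div` 2

main :: IO ()
main = print (work c, depth c)   -- (15, 4) for n = 8
  where c = divideAndConquer 8
```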

Umut's observation is that both of these ideas are expressible by treating execution itself as a thing (the computation graph), and in fact they exploit the same property of that thing: minimal dependencies. A good parallel program is usually stable. It also shakes out that when stability fails -- when a small input change incurs lots of changes in the computation graph -- those changes can themselves be parallelized. (I don't remember the details of this claim, unfortunately.) Umut extols the combination of these ideas as a harmonious framework for computing over "big data".
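Here's a small toy of the stability intuition (again my own illustration, not Umut's formulation): cache per-subtree sums so that changing one input element only forces recomputation along a single root-to-leaf path, i.e. work proportional to the (logarithmically small) change in the computation rather than to the whole input.

```haskell
-- A sum tree caches the sum of each subtree, so changing one input
-- element only recomputes cached sums along one root-to-leaf path.
data SumTree = LeafS Int
             | NodeS Int SumTree SumTree   -- cached sum, left, right

total :: SumTree -> Int
total (LeafS x)     = x
total (NodeS s _ _) = s

-- Build a balanced sum tree over a non-empty list.
build :: [Int] -> SumTree
build [x] = LeafS x
build xs  = NodeS (total l + total r) l r
  where (as, bs) = splitAt (length xs `div` 2) xs
        l = build as
        r = build bs

sizeT :: SumTree -> Int
sizeT (LeafS _)     = 1
sizeT (NodeS _ l r) = sizeT l + sizeT r

-- Update position i to value v, returning the new tree and the number
-- of nodes whose cached sums had to be recomputed.  For a balanced
-- tree this is O(log n), not O(n).
update :: Int -> Int -> SumTree -> (SumTree, Int)
update _ v (LeafS _) = (LeafS v, 1)
update i v (NodeS _ l r)
  | i < sizeT l = let (l', k) = update i v l
                  in (NodeS (total l' + total r) l' r, k + 1)
  | otherwise   = let (r', k) = update (i - sizeT l) v r
                  in (NodeS (total l + total r') l r', k + 1)

main :: IO ()
main = do
  let t       = build [1 .. 1024]
      (t', k) = update 17 100 t
  print (total t, total t', k)   -- the sum changes, but only ~11 nodes are touched
```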

Frank Pfenning on Parallelism in Celf


In his Linear Logic course, Frank has been telling us about Celf, a logic programming language that supports programmer-controlled interleaving of forward and backward chaining. I think of applications of (pure) forward chaining as simulations: the context represents a world or state of some sort, and the programmer describes rules for how pieces of that state evolve, without minding any particular goal. It's "exploratory mode" in a video game. Examples include interpreters/evaluators, Turing machines, and cellular automata. Backward chaining lets us describe semantics and search problems: we want to break down a goal into subcomponents right away and only use a hypothesis if we know it's useful to us (i.e. its head is an immediate subgoal). Examples include type checkers and game solvers.

But many algorithms -- and I'm not sure if there's a way to precisely classify them -- really want both directions of proof search involved. A graph-coloring algorithm, for instance, has a clear goal, which we can describe through backward-chaining rules that "set up" the problem; but the actual coloring process makes much more sense in forward-chaining terms -- and at the end, we can check whether the coloring meets the goal (i.e. no two adjacent nodes share a color). One controls the switch from backward to forward chaining in the program with a forward-chaining monad spelled {A}: the formula inside {} will be searched for only after forward chaining has run to saturation.
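As a rough illustration of the forward-chaining half (a Haskell sketch of the operational idea, not actual Celf syntax; the facts and rules are made up), here's a little database saturated under two reachability rules, with the goal checked only once saturation is reached, loosely mirroring the role of {A}:

```haskell
import qualified Data.Set as Set

-- Facts: "edge x y" and "reach x y".
data Fact = Edge Char Char | Reach Char Char
  deriving (Eq, Ord, Show)

-- One step of forward chaining: fire every rule against the current
-- database and add the conclusions.
step :: Set.Set Fact -> Set.Set Fact
step db = db `Set.union` Set.fromList new
  where
    facts = Set.toList db
    new = [ Reach x y | Edge x y <- facts ]   -- edge x y  =>  reach x y
       ++ [ Reach x z | Reach x y <- facts    -- reach x y, edge y z  =>  reach x z
                      , Edge y' z <- facts
                      , y == y' ]

-- Run to saturation: keep firing rules until nothing new is derivable.
saturate :: Set.Set Fact -> Set.Set Fact
saturate db = let db' = step db
              in if db' == db then db else saturate db'

main :: IO ()
main = do
  let db    = Set.fromList [Edge 'a' 'b', Edge 'b' 'c', Edge 'c' 'd']
      final = saturate db
  -- Only after saturation do we ask the "goal" question, analogous to
  -- the formula under {...} being searched for once forward chaining quiesces.
  print (Reach 'a' 'd' `Set.member` final)   -- True
```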

A nice consequence of the "proof search as computation" slogan of logic programming is "executions as traces." In particular, the finely-controlled, focusing-based proof search in Celf yields very precise traces that admit easy analysis as a computation graph: a Celf trace is a sequence of let-bindings that match on the results of forward-chaining steps, followed by a term at the end representing the goal. We need only track the variables that each binding depends on and produces, then check a simple condition on those sets to "disentangle" the sequence into a computation DAG.
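Roughly, the disentangling looks like this (a Haskell sketch with hypothetical binding names, not Celf's actual trace representation): record which variable each let-binding produces and which variables it uses, and draw a dependency edge wherever a later binding consumes an earlier binding's output.

```haskell
import qualified Data.Map as Map

-- A toy "trace": a sequence of let-bindings, each producing one
-- variable from some previously bound variables.
data Binding = Binding { produces :: String, uses :: [String] }
  deriving Show

-- Disentangle the linear trace into dependency edges: step i depends on
-- step j exactly when step i uses a variable that step j produced.
edges :: [Binding] -> [(Int, Int)]   -- (dependent step, step it depends on)
edges bs =
  [ (i, j)
  | (i, b) <- zip [0 ..] bs
  , v      <- uses b
  , Just j <- [Map.lookup v producers] ]
  where producers = Map.fromList [ (produces b, j) | (j, b) <- zip [0 ..] bs ]

main :: IO ()
main = mapM_ print (edges trace)
  where
    -- let x1 = f(); let x2 = g(); let x3 = h(x1); let x4 = k(x1, x2)
    trace = [ Binding "x1" []
            , Binding "x2" []
            , Binding "x3" ["x1"]
            , Binding "x4" ["x1", "x2"] ]
    -- x2 and x3 share no dependency path, so they could run in parallel.
```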

One thing I wonder (and asked Umut to ponder a bit over coffee) is whether the "inherent parallelism" in Celf might translate to "inherent dynamic computation" for the same reason. What would it take to have Celf-adjusting Computation, as it were?

Tea conversation: hulls and monads


At our weekly departmental Tea, Graphics Student Nico brought up something he'd been thinking about recently in his research: hull operators. Having only heard of hulls in the sense of "convex hulls" (which, incidentally, Umut's talk used for a bunch of examples), I learned that, as a more general concept, a hull is anything that covers a set of points, e.g. the universe or a bounding box (each of which is less "tight" than a convex hull). Then he proceeded to axiomatize hull operators h:

1) They must be extensive, i.e. cover the set of points, i.e. X ⊆ h(X)
2) They must be monotonic, i.e. X ⊆ Y ⇒ h(X) ⊆ h(Y)
3) They must be idempotent, i.e. h(h(X)) = h(X)

About midway through point two, I was like, "...wait wait wait" and got out a notebook so I could pencil them down and confirm my suspicions: yep, it's a monad over the subset-inclusion category. Wikipedia notes this offhandedly, but I wonder if it has any computational implications. For instance, monotonicity seems related to stability for the problem of calculating the hull.
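For what it's worth, here's a quick Haskell sketch of that suspicion (my own toy example): the discrete axis-aligned bounding box as a hull operator, with the three axioms as runnable checks and the monad reading noted in comments.

```haskell
import qualified Data.Set as Set

type Point = (Int, Int)

-- One concrete hull operator: the (discrete) axis-aligned bounding box
-- of a finite point set.
bbox :: Set.Set Point -> Set.Set Point
bbox ps
  | Set.null ps = ps
  | otherwise   = Set.fromList [ (x, y) | x <- [xmin .. xmax], y <- [ymin .. ymax] ]
  where
    xs = Set.map fst ps
    ys = Set.map snd ps
    (xmin, xmax) = (Set.findMin xs, Set.findMax xs)
    (ymin, ymax) = (Set.findMin ys, Set.findMax ys)

-- Read as monad structure on the inclusion order: extensiveness is the
-- unit X ⊆ h(X), idempotence h(h(X)) = h(X) gives the join, and
-- monotonicity is functoriality.
main :: IO ()
main = do
  let small = Set.fromList [(0, 0), (2, 3)]
      big   = Set.fromList [(0, 0), (2, 3), (5, 1)]
  print (small `Set.isSubsetOf` bbox small)        -- extensive:  X ⊆ h(X)
  print (bbox small `Set.isSubsetOf` bbox big)     -- monotone:   X ⊆ Y ⇒ h(X) ⊆ h(Y)
  print (bbox (bbox small) == bbox small)          -- idempotent: h(h(X)) = h(X)
```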

---

Well, I had two more subheadings that pulled abstract interpretation into my mix of themes (meeting with Patrick Cousot; Carsten Schürmann's presentation of "truthful monadic abstractions"), but I'm actually about to see Dr. Cousot's distinguished lecture, so perhaps I'll hold off til afterwards!
