# TWIL: monads, parallelism, geometry

It's been a high-input, low-output sort of week. In the interest of accountability and perhaps higher retention, I'll try to summarize some of the things I've learned. Themes include monads, parallelism, and geometry. :)

## Umut Acar on Dynamic Parallel Programming
Umut describes two ideas from CMU theses with which I'm familiar: parallel cost semantics using computation graphs (Daniel Spoonhower) and self-adjusting computation (Umut himself, though I'm familiar with it through Ruy Ley-Wild). In the former, one can analyze the parallelism of an algorithm as a function of the dependencies between computations; the algorithm is characterized in terms of work, the total cost, and depth, the length of the longest critical path through the dependency graph. Bob Harper wrote a nice post summarizing these ideas. With self-adjusting (or dynamic) computation, the idea is that you can re-run a program on slightly-changed inputs and, if the problem is stable (small input changes correspond to small output changes), the subsequent running time should be proportional to the change. That is, similar inputs yield not only similar outputs but also similar computation traces.
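The work/depth analysis above can be made concrete on a computation DAG. Here's a minimal sketch (my own illustration; the node names and unit costs are made up, not from the talk): work is the total cost of all nodes, depth is the longest cost-weighted path.

```python
# A sketch of Spoonhower-style work/depth analysis on a computation DAG.
# Node names and costs are invented for illustration.

def work_and_depth(dag, cost):
    """dag maps each node to the nodes it depends on; cost maps node -> cost."""
    work = sum(cost[n] for n in dag)  # total cost of all computations

    memo = {}
    def depth(n):  # longest cost-weighted path ending at node n
        if n not in memo:
            memo[n] = cost[n] + max((depth(d) for d in dag[n]), default=0)
        return memo[n]

    return work, max(depth(n) for n in dag)

# A tiny fork-join graph: "a" spawns "b" and "c" in parallel; "d" joins them.
dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
cost = {n: 1 for n in dag}
print(work_and_depth(dag, cost))  # (4, 3): work 4, depth 3
```

The ratio work/depth then bounds the average parallelism available in the algorithm.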

Umut's observation is that both of these ideas are expressible by treating execution as a thing and in fact they exploit the same properties of that thing: minimized dependency. A good parallel program is usually stable. It also shakes out that when stability fails -- when a small input change incurs lots of changes in the computation graph -- those changes can be parallelized. (I don't remember the details of this claim, unfortunately.) Umut extols the combination of these ideas as a harmonious framework for computing over "big data".

## Frank Pfenning on Parallelism in Celf
In his Linear Logic course, Frank has been telling us about Celf, a logic programming language that supports programmer-controlled interpolation of forward and backward chaining. I think of applications of (pure) forward chaining as simulations: the context represents a world or state of some sort, and the programmer describes rules for how pieces of that state evolve, without minding any particular goal. It's "exploratory mode" in a video game. Examples include interpreters/evaluators, Turing machines, and cellular automata. Backward chaining allows us to describe semantics and search problems; we want to break down a goal into subcomponents right away and only use a hypothesis if we know it's useful to us (i.e. its head is an immediate subgoal). Examples include type checkers and game solvers.

But many algorithms -- and I'm not sure if there's a way to precisely classify them -- really want both directions of proof search involved; a graph-coloring algorithm, for instance, has a clear goal, which we can describe through backward-chaining rules that "set up" the problem; but the actual coloring process makes much more sense in forward chaining terms -- and at the end, we can check whether the coloring meets the goal (i.e. no two adjacent nodes are the same color). One controls the switch from backward to forward chaining in the program with a forward-chaining monad spelled {A}. The formula in {} will be searched for only after forward chaining is run to saturation.
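To make "forward chaining to saturation" concrete, here's a sketch in Python rather than Celf (so no linearity, no focusing -- just the naive fixpoint idea): fire rules repeatedly until no new facts appear. The edge/path rules are a standard textbook illustration, not from the course.

```python
# Forward chaining to saturation, simplified: repeatedly apply rules
# until the set of facts stops growing.  (Celf's linear context is
# deliberately ignored here -- facts persist.)

def saturate(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(facts):
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

# Rule 1: every edge is a path.  Rule 2: paths compose transitively.
def edge_to_path(facts):
    return {("path", a, b) for (k, a, b) in facts if k == "edge"}

def trans(facts):
    paths = {(a, b) for (k, a, b) in facts if k == "path"}
    return {("path", a, c) for (a, b) in paths for (b2, c) in paths if b == b2}

facts = saturate({("edge", 1, 2), ("edge", 2, 3)}, [edge_to_path, trans])
print(("path", 1, 3) in facts)  # True: saturation derived the transitive path
```

In the Celf setting, the goal inside {} would only be attempted once this loop reaches its fixed point.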

A nice consequence of the "proof search as computation" slogan of logic programming is "executions as traces." In particular, the finely-controlled, focusing-based proof search in Celf yields very precise traces that allow easy analysis as a computation graph: a Celf trace is a sequence of let-bindings that match on forward-chaining computation followed by a term at the end representing the goal; we need only track the variables that such a binding depends on and produces, then check a simple condition on those sets to "disentangle" the sequence into a computation DAG.
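The "disentangling" step above can be sketched as follows (my own simplification, with invented step names, and again ignoring linearity): record which step produced each variable, and draw a dependency edge whenever a later binding consumes something an earlier one produced. Steps with no path between them can run in parallel.

```python
# Disentangling a Celf-style trace into a computation DAG.
# Each trace step is (name, variables used, variables defined).

def disentangle(trace):
    """Return the set of dependency edges (producer_step, consumer_step)."""
    producer = {}          # variable -> step that produced it
    edges = set()
    for name, uses, defs in trace:
        for v in uses:
            if v in producer:
                edges.add((producer[v], name))
        for v in defs:
            producer[v] = name
    return edges

trace = [
    ("s1", set(),      {"x"}),
    ("s2", {"x"},      {"y"}),
    ("s3", {"x"},      {"z"}),   # s2 and s3 share only an input: parallel
    ("s4", {"y", "z"}, {"w"}),
]
print(sorted(disentangle(trace)))
# [('s1', 's2'), ('s1', 's3'), ('s2', 's4'), ('s3', 's4')] -- a diamond DAG
```

Here the sequential trace s1; s2; s3; s4 disentangles into a diamond: s2 and s3 are independent, so the DAG exposes parallelism the linear trace hid.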

One thing I wonder (and asked Umut to ponder a bit over coffee) is whether the "inherent parallelism" in Celf might translate to "inherent dynamic computation" for the same reason. What would it take to have Celf-adjusting Computation, as it were?

## Tea conversation: hulls and monads
At our weekly departmental Tea, Graphics Student Nico brought up something he'd been thinking about recently in his research: hull operators. Having only heard of hulls as in "convex hulls" (which incidentally Umut's talk used for a bunch of examples), I learned that, as a more general concept, a hull is anything that covers the set of points, e.g. the universe or a bounding box (each of which is less "tight" than a convex hull). Then he proceeded to axiomatize hull operators h:

1) They must be extensive, i.e. cover the set of points, i.e. X ⊆ h(X)
2) They must be monotonic, i.e. X ⊆ Y ⇒ h(X) ⊆ h(Y)
3) They must be idempotent, i.e. h(h(X)) = h(X)

About midway through point two, I was like, "...wait wait wait" and got out a notebook so I could pencil them down and confirm my suspicions: yep, it's a monad over the subset-inclusion category. Wikipedia notes this offhandedly, but I wonder if it has any computational implications. For instance, monotonicity seems related to stability for the problem of calculating the hull.
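Just to convince myself, the three axioms can be spot-checked (examples, not a proof) on a concrete hull operator -- say, the set of all integer points inside an axis-aligned bounding box. The operator and the test sets below are my own illustration:

```python
# A bounding-box hull on finite sets of integer points, with the three
# hull-operator axioms checked on small examples.

def bbox_hull(pts):
    """All integer points inside the axis-aligned bounding box of pts."""
    if not pts:
        return frozenset()
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return frozenset((x, y)
                     for x in range(min(xs), max(xs) + 1)
                     for y in range(min(ys), max(ys) + 1))

X = {(0, 0), (2, 1)}
Y = X | {(3, 3)}

assert X <= bbox_hull(X)                        # extensive: the monadic unit
assert bbox_hull(X) <= bbox_hull(Y)             # monotonic: functoriality
assert bbox_hull(bbox_hull(X)) == bbox_hull(X)  # idempotent: the monadic join
print("axioms hold on these examples")
```

In the subset-inclusion poset viewed as a category, "extensive" is the unit X → h(X), "monotonic" makes h a functor, and "idempotent" gives the join h(h(X)) → h(X) -- i.e. a closure operator, which is exactly a monad on a poset.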

---

Well, I had two more subheadings that pulled abstract interpretation into my mix of themes (meeting with Patrick Cousot; Carsten Schürmann's presentation of "truthful monadic abstractions") but I'm actually about to see Dr. Cousot's distinguished lecture, so perhaps I'll hold off til afterwards!
