As if people matter

I've been wanting to respond to this essay lamenting the disconnect between PL popularity and PL research since it started making the rounds earlier this week, and I've wanted to write a post about what I think PL theory and interaction design have to do with one another for even longer; possibly I can tie these thoughts together in a single post. [WARNING: much fuzzy speculation that lacks concrete substantiation herein, but I wanted to put these thoughts together before they dissolve any further into my subconscious...]

"Programming languages as if people matter"

The larger context for these thoughts is an ongoing battle I have been mentally cataloging for years (one that certainly predates my involvement in academic research, so I'm bound to say some misinformed things -- please correct me!): the frequent and frustrating friction between researchers in type theory and logic (in my terms, "PL theory") and those who criticize such work for focusing on problems rather distantly removed from human practitioners and their needs (in my terms, "HCI" and "software engineering").

At first I had a habit of conflating this issue with theory vs. practice. That's a significantly more tired argument, and I'm not particularly interested in having it: let's take as given that we consider theory for the sake of theory valuable on the basis of specialization and separation of concerns -- humans have different interests and abilities, and as a whole we should expect to make more progress by working on the ideas that compel us and suit our talents. (I'm perhaps ignoring a significant portion of Crista's post by taking this for granted, but it's a wide-ranging post, and there are bits of it making different arguments that I'm more interested in -- such as the opening question about "what's left" for PL research. It's my aim to get to that by the end.)

But there's more going on here than theory vs practice, because even if we stick to theory and pure academic research, there's really an awful lot to say about design, cognition, and usability. The real question I'm interested in here is what that theory and PL theory have to do with each other.

I owe a lot of my philosophical grounding on the nature of programming languages research to Bob Harper, who has recently been wielding the slogan "Programming Languages As If People Matter" to address this argument. Because I'm not as eloquent, my summary will be rather crude, but thanks to Twitter I do have one direct quote: "[Type theorists] *are* doing user studies; we're drawing on millennia of human experience in mathematics." The "millennia of human experience" he's referring to is the practice of logic, which has its roots in philosophy's attempt to understand the very thing, human cognition, that critics claim PL ignores. By doing PL with logic, we are tapping into the very roots of the study of human nature; thereby, "Programming Languages As If People Matter."

Now, don't get me wrong: I'm quite fond of my field's roots in philosophy, and I appreciate Bob's recognition of its relevance to science. However, I take issue with trivializing empirical methods by equating them with "drawing on experience". Experience is not experiment, and it's scientifically (philosophically!) sloppy to even draw an analogy between the two. While logic's initial project was to explain human cognition, there have since been some pretty compelling refutations of its usefulness in that domain, along with the pretty awesome silver-lining discovery that it is useful for explaining computation! So from my point of view, logic is important in the design of PLs because (to compress history rather lossily) it takes our pristine, aesthetically pleasing idealization of human cognition and allows us to express ideas from our messy, error-prone brains in terms of it.


Design as if people matter


Don Norman wrote a book called The Design of Everyday Things. It's a wonderful book. It discusses the manufactured objects humans interact with mundanely (doors, alarm clocks, thermostats) and explains how their designs commonly fail to correspond with well-understood human cognition. It describes useful notions like affordance (a feature that suggests a particular action, the way a handle suggests pulling), natural mapping (the physical layout of interactable features corresponding with the mental model of their function, like stove burner controls matching the layout of the burners), and immediate feedback, backing each up with the cognitive science that drives their utility. It criticizes designs that win awards for dazzling aesthetics, and it laments that the basic scientific principles discovered in academia to give a metric for "good" design rarely make it into commercial products. Sound familiar?

The part where my own pre-formed opinions about design start to conflict with Norman's is when he starts describing computer interfaces. I could write an entirely separate post about those, and it'd be too much of a tangent to describe them here (though I suppose I can summarize by saying I'm a unix weenie who prefers vim and keyboard commands to IDEs and mice). The point of bringing it up at all is that I diverge at exactly the point where he starts trying to apply ideas about the Design of Everyday Things to something that is probably best understood as not an Everyday Thing. In his own words:

What are not everyday activities? Those with wide and deep structures, the ones that require considerable conscious planning and thought, deliberate trial and error: trying first this approach, then that---backtracking. Unusual tasks include writing a long document or letter, making a major or complex purchase, computing income tax, planning a special meal, arranging a vacation trip. And don't forget intellectual games: bridge, chess, poker, crossword puzzles, and so on.
Everyday-thing-design still has a place in such activities, though:
To play the piano, we must move the fingers properly over the keyboard while reading the music, manipulating the pedals, and listening to the resulting sounds. But to play the piano well, we should do these things automatically. Our conscious attention should be focused on the higher levels of the music, on style, and on phrasing. So it is with every skill. The low-level, physical movements should be controlled subconsciously.
One of the things I quickly noticed in my first-ever software development internship was that I hated programming in C++ because I felt like all my intellectual energy was going into manipulating my tool---the language---rather than using it to explore and express my higher-level ideas.

But when I took a course on "human aspects of software development" to try to understand this frustration more scientifically, it felt like the emphasis was in all the wrong places. The field seemed to take Norman's message and pervert it into the idea that whatever can't be treated as an everyday thing just isn't worth studying; anything that isn't instantly discoverable or learnable in one lab study is an idea worth discarding. As a result, the field doesn't appear to get much further than surface syntax and editing tools when it comes to understanding PLs. And I mean, is that okay? Is the rest of it just too "wide and deep" a problem to understand on design terms? I don't think so -- I say cautiously, based on personal experience programming in different languages combined with the deep understanding I have through theoretical training. There are meaningful differences between languages that have nothing to do with my editor or syntax. We will probably not be able to use the same methodologies that we use to examine everyday things, but we shouldn't pretend that these differences have nothing to do with design.

CLAIM: types are design tools.

I'm personally quite interested in the interaction of theory and design. How can they enable each other? Two particularly striking examples are REPLs and Agda's emacs mode. REPLs are Norman-approved with their quick-feedback mechanism. Agda's emacs mode does a similar thing, but with the power of dependent types: it helps you construct a program by supplying you with skeletons for the cases of a function you need to write, and if further pattern matching unifies two variables, they are uniformly replaced in your program, which gets even shorter. This takes the rather compelling idea that stronger types lead to shorter programs to the next level: the programs can be not only shorter but more interactively developed. ("Security" and "reliability" and "debugging" are all well and good, but to me, these are the real reasons type theory is exciting!)
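
To make that workflow concrete, here is a minimal Agda sketch; the function name and the use of Vec from the standard library are my own illustrative choices, not examples from the post:

    module VecMap where

    open import Data.Nat using (ℕ)
    open import Data.Vec using (Vec; []; _∷_)

    -- Starting from a single clause "vec-map f xs = ?", asking the
    -- emacs mode to case-split on xs generates both clauses below.
    -- In the [] clause, unification forces the length index n to be
    -- zero, so no impossible cases need to be written by hand.
    vec-map : {A B : Set} {n : ℕ} → (A → B) → Vec A n → Vec B n
    vec-map f []       = []
    vec-map f (x ∷ xs) = f x ∷ vec-map f xs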

In the other direction of influence, we can take seriously the idea that people want to e.g. write GUIs or program the web and develop new theories/models of computation that try to make sense of the happenstance of human invention.

Or, interpreting the influence of theory on design another way, what if we think about interfaces as language problems? What's the linguistic abstraction offered by a REPL? Or a drawing program with brushes and palettes? Or a musical instrument, or a toaster? Could we understand these tools better if we thought about them in terms of logically-informed languages?

In other words, what we do as PL theory researchers is design vectors for understanding. In that sense, I "cannot rightly apprehend the confusion of ideas that would provoke such a question" as "Is there still research to be done in Programming Languages?" There will be PL research to be done as long as there is computer science research to be done; as long as any technological innovation continues. As long as there are new ways to interact with the world.

All that said, Crista does ask a good question: how do we evaluate PLs?

She says: performance, productivity, verifiability

Maybe so, if the PL is a language in the sense of a full implementation including libraries, ecosystem, and community.

But what about theory? Well, we can prove theorems. We can use the mathematical criteria of (any/all of local, internal, or semantic) soundness and completeness---local reduction and expansion, cut/identity, correspondence to category theory. That's one way.
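
To give a flavor of the "local" criteria, here is a sketch of local soundness and completeness for conjunction in natural deduction, in the style of Pfenning's judgmental presentations (rendered in LaTeX with the proof.sty inference macros; the particular presentation is my own):

    % Local soundness: an introduction immediately followed by an
    % elimination reduces away (the elimination rules are not too
    % strong).
    \[
    \infer[{\wedge}E_1]{A}{
      \infer[{\wedge}I]{A \wedge B}{
        \deduce{A}{\mathcal{D}} & \deduce{B}{\mathcal{E}}}}
    \quad\Longrightarrow_R\quad
    \deduce{A}{\mathcal{D}}
    \]
    % Local completeness: any proof of A /\ B expands into one ending
    % in an introduction (the elimination rules are not too weak).
    \[
    \deduce{A \wedge B}{\mathcal{D}}
    \quad\Longrightarrow_E\quad
    \infer[{\wedge}I]{A \wedge B}{
      \infer[{\wedge}E_1]{A}{\deduce{A \wedge B}{\mathcal{D}}} &
      \infer[{\wedge}E_2]{B}{\deduce{A \wedge B}{\mathcal{D}}}}
    \]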

But in the sense that theory is also design, we want to evaluate it in terms of the abstractions offered and the expressivity enjoyed. Perhaps those things are impossible to quantify, and perhaps for now we can allow such qualities to be evaluated subjectively. But maybe there's a science behind it; maybe existing science can apply here, or maybe we need something radically different. What we can't do is ignore that these are design criteria, and thereby ignore existing fundamentals in design.

Comments

  1. My interpretation of "[Type theorists] *are* doing user studies; we're drawing on millennia of human experience in mathematics" is that it refers not to the practice of logic per se, but rather to the communication of *understanding* through mathematics and proof. This is communication that is often helped by abstraction and expressivity, and there seems to be good reason to believe that evolutionary pressure on modes of mathematical thought trends this way.

    It seems right to remark that experience is not experiment. Even evolution, a science that does not have the opportunity to conduct directed experiments, requires evidence and justification in its interpretation of data. It would be an interesting historical exercise to see what mathematics lived and what mathematics died, and whether or not this had any relationship to abstraction and expressivity.

  2. I think this gets at the question, "Does mathematics need external validation?" The idea that it doesn't is only a fairly recent meme, and although it has been an incredibly powerful meme (see the AMS Notices article "A Revolution in Mathematics? What Really Happened a Century Ago and Why It Matters Today"), I think that ultimately it's unsustainable. As you say, we can't ignore the fact that all of these things (logic, types, ...) are just conceptual tools for understanding and manipulating our world. We want to be able to evaluate these tools, and refine or replace them when they no longer serve their tasks. On the other hand, I think that as PL theorists we are pretty confident that at least a few of our core tools will survive such evaluation for some significant amount of time, since in the current state of the world they often give us a competitive advantage.

    Also, though I wasn't there, I think that in some ways this debate about PL theory ("versus" HCI/software engineering, but also about the place of PL theory within computer science more generally) is similar to the debate about category theory within mathematics. Category theory was, and still is, dismissed as being too far removed from "ordinary" mathematics. The difference with PL theory is that category theory has spent more concentrated effort refining its tools, and that slowly and quietly these tools have begun to see widespread usage. The danger here is that the process is too quiet, in particular for funding agencies.
