What "safety" means to a PL designer

I just finished reading Justus Robertson & R. Michael Young's INT 2013 paper about their work using the fact that players have incomplete knowledge of a simulation to accommodate incongruous player choices -- those that don't meet the story's goals -- by selectively rewriting the past. I made some sketchnotes summarizing the paper.

Of course, if all one wanted to do were to ensure an authorial goal (e.g. that Batman goes to Harvey's location), one could simply make all seemingly-significant "choices" lead to the same scene. What I find interesting about this work is that it ensures any railroading of this form is done in a perceptually consistent way -- one that won't leave the player wondering how the outcome was possible, because everything they saw was consistent with their knowledge. There's a world model that led to Harvey being in one place and Rachel being in another, and that world model can change as long as the change goes unobserved. The planner will only make changes that maintain this property.

This guarantee -- that the world model's rearrangements don't disturb perceptual consistency -- is, in a PL designer's terms, a form of safety. It's a property of a planning system that can be modeled in a logical formalism and checked with a computer.

Most of the time, when you learn about type systems and type safety, you learn that safety means preventing program crashes and mundane bugs (like applying a non-function). In more sophisticated type systems, it might mean ensuring that you never dereference a null pointer or that your concurrent program is deadlock-free.
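For concreteness, here's a minimal sketch of that baseline notion of safety, using Python type hints as checked by an external tool like mypy (the function names are mine, purely illustrative):

```python
from typing import Optional

def plus_one(n: int) -> int:
    return n + 1

def length_of(s: Optional[str]) -> int:
    # A checker like mypy rejects `len(s)` here until the None case is
    # handled -- the static analogue of never dereferencing a null pointer.
    if s is None:
        return 0
    return len(s)

print(plus_one(41))     # prints 42
print(length_of(None))  # prints 0

# Both of these are rejected before the program ever runs:
#   plus_one("hello")    -- wrong argument type
#   plus_one(4)("oops")  -- an int is not a function
```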

Because we talk about safety properties like these the vast majority of the time, I think people understandably get the impression that type- and logic-based programming language design is needlessly uptight for creative, freewheeling endeavors like game creation. Jonathan Blow certainly thinks so. It's as though we're insisting you wear a lifejacket every time you go swimming, when all you wanna do is splash around in the shallow end.

I think the word "safety" is kind of a PR disaster for PL folks. Yes, sometimes we really do have safety-critical systems that we want to ensure have no bugs whatsoever, because actual human lives are at stake. Sometimes we care about the intersection with security & information flow, which affects people's personal information. And of course no one likes it when things crash or we get our code subtly wrong. But must we sell the Big PL Ideas as being fundamentally about never making mistakes, or preventing as many mistakes as possible, as opposed to actively enabling creative work, empowering people to express their design intent in a way that can, for instance, create more compelling game experiences?


  1. I agree. I routinely advise the writers of the kinds of application which concern me (grant applications) that "correctness by construction", as a slogan, doesn't sound all that relevant to the writers of the kinds of application that most people use on their computers. I prefer to send a message about types, used with as little theorem-proving as I can get away with, improving the basic hygiene of our work and enabling more of the executable code to be inferred from the stated design.

    The culture of "type inference" has done a lot of damage: the mistake is to imagine that what we write is what gets run, with types policing us on the basis of minimal annotation. I'd much prefer to write design statements and high-level programming strategies which don't get run at run-time, but allow the executable code to be generated from minimal source in a type-directed way.

    Suit-of-armour safety is the wrong message. Type synthesis is the wrong tool. It's time we left last century behind.

  2. There seems to be one way a suit-of-armor argument *can* sell types to the industry: test-driven development (TDD) -- that is, writing lots of unit tests -- is really big now. It's necessary, too, since otherwise big projects can't maintain their quality as they scale up. Now, the number of necessary unit tests demonstrably drops as the strength of the type system increases. So a strong type system increases productivity!

    Also, types allow for better tooling, like improved code completion and semi-automatic refactoring tactics. Many programmers I know love that, because it boosts their productivity. So again, typing increases productivity.
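    As a small, hypothetical illustration of that trade (the names here are invented, not from any real codebase): a precise return type can replace the unit test for an edge case, because a checker like mypy forces every caller to handle it.

    ```python
    from typing import Optional, Sequence, TypeVar

    T = TypeVar("T")

    def head(xs: Sequence[T]) -> Optional[T]:
        """First element, or None when the sequence is empty."""
        return xs[0] if xs else None

    def shout_first(names: Sequence[str]) -> str:
        first = head(names)
        # mypy rejects `first.upper()` until the None case is handled,
        # so "crashes on empty input" needs no unit test of its own.
        return first.upper() if first is not None else "(nobody)"

    print(shout_first(["ada", "grace"]))  # prints ADA
    print(shout_first([]))                # prints (nobody)
    ```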

  3. I think part of the problem is that a lot of us (explicitly or not) believe that our concerns as PL researchers are universal. If a language purports to ensure correctness, it should really do that; if it purports to exclude certain kinds of bugs, it should really do that. And if it purports to (quoting the comment above) allow code to be generated from minimal source, the generation had better not introduce fresh bugs itself. So the kind of work we need to do as PL researchers tends to fall around the "no bugs whatsoever" end of the spectrum, which biases us into thinking this is typical of programming in general.

  4. Hi Chris,

    I used to agree with your point very strongly -- I used to say that I wanted to make programming fun, not safe -- but over the last year or two I have become much more conventional in my views. Basically, the recent NSA spying revelations have convinced me that the combination of spying, computer science, and advertising has turned our society into a total-surveillance police state, and so safety (and I mean actual safety for actual humans) is a much more pressing problem than I thought it was.

    Obviously, formal verification has some chance of hindering the NSA's hack-all-the-things policy, but more surprisingly, it's Twitter that convinced me that CS techniques have some hope of improving the real safety of real people. Basically, techniques like differential privacy have the "burn the database" problem -- once you've performed a certain number of queries, you can't access the database again without breaking its privacy guarantees. However, Twitter is a *streaming* service, so stream queries can constantly renew themselves, without either breaking security or hitting a burn-the-database state.
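    A toy sketch of the burn-the-database problem (the class and its API are invented for illustration, not from any real differential-privacy library): each query spends privacy budget, and once the budget is gone the data can no longer be queried without breaking the guarantee.

    ```python
    import random

    class PrivateDB:
        def __init__(self, data, total_epsilon=1.0):
            self.data = list(data)
            self.budget = total_epsilon  # finite privacy budget

        def noisy_count(self, predicate, epsilon=0.25):
            if epsilon > self.budget:
                raise RuntimeError("budget exhausted: the database is 'burned'")
            self.budget -= epsilon
            true_count = sum(1 for x in self.data if predicate(x))
            # Laplace noise with scale 1/epsilon, built as the
            # difference of two exponential samples.
            noise = random.expovariate(epsilon) - random.expovariate(epsilon)
            return true_count + noise

    db = PrivateDB(range(100))
    for _ in range(4):
        db.noisy_count(lambda x: x < 50)  # each call spends 0.25 of the budget
    # A fifth call raises: the database is burned. A streaming service can
    # escape this, since each window of fresh data arrives with fresh budget.
    ```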

    1. Ok, you have a point. I guess I should be careful not to suggest that the common meaning of safety *shouldn't* be at all related to programming language safety, as the former is definitely an important application of the latter! It still, however, seems important to *distinguish* the ideas, so that "safety" in an expressiveness sense isn't mistaken for the much more critical notion of "safety" in a harm-to-humans sense.

  5. Yes! In fact, PL safety can actually be opposed to real safety: for example, if we made every tablet/phone OS memory- and capability-safe, that would be more PL-safe. But it would be less safe for the people using them, since there would then be no way to escape the walled garden.


