OBT 2016: Operationalizing Creative Theories

Below are the slides for my keynote at Off the Beaten Track (OBT) 2016, co-located with POPL:



Since my preference for sparse slides makes this deck difficult to follow sans soundtrack, I'd like to recap some of the ideas I presented. If you'd like to open the slides in a new tab, here they are on Speaker Deck.

I titled the talk "Operationalizing Creative Theories" before I knew what the talk was going to be about, so it's not a perfect title. I initially picked the word "operationalizing" in the sense of Eger et al.'s "Operationalizing the Master Book of All Plots," which I like very much as the verb referring to the Generative Methods idea of "taking a formal, declarative specification of what a creative domain is made up of and turning it into an algorithm for constructing instances of that domain."

I talked a little bit about operationalization, but mostly I talked about interrelations between formal logic and the concepts of world simulation and generativity in games. The first few slides allude to Dave Walker's POPL keynote on Research Confluences, instantiated at the confluence of POP methods (computational logic) and EIS methods (expressive game design). (Since EIS is pronounced "ice," it just occurred to me that I missed an opportunity to make an ice-pop pun. C'est la vie.)

Text to accompany the slides after the introduction follows.



Imaginative Play

The first connection I describe is one between hypothetical reasoning and imaginative play. Several people before me have noticed the similarities in cognitive function: supposing facts that one doesn't know to be true in order to make inferences about alternate universes, such as fantasy worlds in stories, bears formal similarity to the kind of abstraction one does in logic. By formulating settings in which things need not be true in reality to make sense, we get the magic circles that enable both programming and play.

Creating a sense of magic in digital games often relies on combinatorial techniques, such as inventories, verb/noun interfaces, and crafting systems to give the player a sense of agency, as though their own imagination is the limit when it comes to conjuring responses from the world. At a more minimal level, the concept of emergent behavior is embodied canonically by systems like cellular automata that are specified in terms of three simple rules through which all of computation may be rediscovered. Coming up with game systems that are emergent in this sense has seen success primarily in text-based games, and it is enabled today largely by the advent of new programming languages, such as Inform 7, which leans heavily on relational and rule-based language design.
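The canonical cellular automaton mentioned above, Conway's Game of Life, can be captured in a few lines. Here's a minimal sketch (my own illustration, not from the talk) using a set of live cells:

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; live is a set of (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 live neighbors,
    # or exactly 2 and is currently alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

The entire specification fits in one rule about neighbor counts, yet gliders, oscillators, and even universal computation emerge from it.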

Other referents in this section:
Legend of Kyrandia, Book II, in which the player conjures potions by following a spellbook, and in which there is some flexibility about which items in the world can satisfy which ingredient.

The more-usual suspects: Monkey Island, Minecraft, Conway's Game of Life, Zork

Collaborative Play

When humans play games with other humans, the source of magical-feeling responsiveness can come instead from the complicated, unpredictable creativity of other humans. Tabletop roleplaying games like D&D are an especially systematized approach, but simpler examples include surrealist games like Exquisite Corpse and Mad Libs. All of these techniques depend on information hiding, where contributors to the artifact must create their piece of it without knowing exactly how it's going to fit in with the other pieces. Only after everyone has committed to their contribution do they see how it's all assembled, and the structure of the game may ensure more or less coherence to the combined result.

Collaborative play also has the potential to blur the line between system [or its enforcer, who is in control] and human [who is at the system-enforcers' mercy]. By having humans mutually enact and enforce a system, more symmetry is possible. Peter Suber's Nomic is an example of a collaborative game where each player can modify the rules out from under one another.

Generative Methods

Generative methods, or as it's sometimes known when applied to games, procedural content generation (PCG), can be seen as a way of incorporating some of the ideas from collaborative play into digital games: use algorithms and constrained affordances to produce novel configurations of elements that make up a game level/piece of music/painting/line of dialogue/et cetera.

In Gillian Smith's paper An Analogue History of PCG, she uses the term "design for recombination" to describe a principle of (bottom-up, at least) generative methods techniques: come up with a set of elements (analogous to tiles/Lego bricks) that can be used to define lots of (perhaps infinitely many) instances of your domain. Examples include the interchangeable lines of Oulipo's Cent Mille Milliards de Poèmes, the modular tiles in Dungeon Geomorphs, the wardrobes of paper dolls (and online avatar creators), and, perhaps most canonically, Lego bricks. Another example (that I didn't call out in the talk at the time) is syntax trees for programming languages, which in some cases (e.g. lambda calculus) amount to a tiny lexicon together with rules for assembling it in infinitely many different configurations.

In Programming Languages, we have another word for "design for recombination": compositionality. We get the concept from Frege, who formulated it as something like, "the meaning of any complex thing can be determined by the meaning of its parts." That is, for some notion of meaning and some combination operator *, we can understand the meaning of A * B by independently understanding the meanings of A, B, and *.
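Frege's principle can be made concrete with a toy denotational semantics (my illustration, not from the talk): the meaning of a compound expression is computed solely from the meanings of its parts, so a subterm means the same thing in any context.

```python
def meaning(expr):
    """Map an expression tree to a number. An expression is either a
    number (a leaf) or a tuple (op, left, right)."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    # The meaning of an operator is a function on meanings...
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    # ...applied to the independently computed meanings of the parts.
    return ops[op](meaning(left), meaning(right))

# (2 + 3) * 4: the subterm (2 + 3) means 5 regardless of where it appears.
assert meaning(("*", ("+", 2, 3), 4)) == 20
```

Nothing in the evaluation of the subterm consults its surrounding context, which is exactly what lets us combine understood pieces freely.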

The hypothetical judgment I mentioned in section one, when combined with substitution, obeys compositionality (which is embodied in a Cut or Substitution principle).

Another way of understanding compositionality is that a component has the same meaning regardless of its context/other things it might be combined with. That principle should be more familiar to people who work with bottom-up PCG techniques like production grammars: it's exactly the condition ensured by context-freeness.
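Production grammars make that context-freeness tangible. Here's a toy Tracery-style expander (the rules are hypothetical, purely for illustration); because the grammar is context-free, each nonterminal expands the same way no matter where it appears:

```python
import random

# A toy context-free production grammar, Tracery-style.
grammar = {
    "story": ["The #hero# #deed# the #hero#."],
    "hero":  ["knight", "witch", "golem"],
    "deed":  ["befriended", "betrayed", "rescued"],
}

def expand(symbol, rng=random):
    """Pick a production for symbol and recursively expand any
    #nonterminal# references it contains."""
    production = rng.choice(grammar[symbol])
    while "#" in production:
        pre, name, rest = production.split("#", 2)
        production = pre + expand(name, rng) + rest
    return production

print(expand("story"))  # e.g. "The witch rescued the golem."
```

Each `#hero#` is expanded independently, with no knowledge of its neighbors, so the two heroes in one story may differ: that independence is both the power and the limitation of bottom-up generation.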

Narrative makes a particularly interesting case to study for generative methods and compositionality. Naïvely, we might conclude that bottom-up algorithms for constructing narrative are impossible, because narratives aren't compositional: you can't take Crime & Punishment and Alice in Wonderland wholesale and claim that their sequential juxtaposition is a new, well-formed story. 

But of course, everything depends on how we decompose a domain into pieces and how we allow ourselves to recombine them. If we make combination partial, and specify an interface to components that restricts when we can combine them (and requires that the resulting interface of the combined thing be defined), more possibilities for compositionality emerge. Examples include:
  • Emily Short & Richard Evans' Versu, in which a story component is a character, and together with a very rich notion of how sets of characters interact, these components can be combined to produce a story.
  • Aaron Reed's 18 Cadence, in which a story component is a (year, room) coordinate at the address of a particular fictional house, which Reed has populated with an ever-changing cast of inhabitants with changing lives.
  • My extremely simplified and formalist account of simulated character interactions in linear logic, which (as a tradeoff) has a very rigorous account of compositionality in terms of linear logic's proof theory. The necessary meaning is encapsulated by the equational theory of symmetric monoidal categories.
One commonality between the latter two examples is their lack of firm commitments about the distinction between a specific authored experience and a tool or method for creating stories. Reed described 18 Cadence as a "story kit," and the example I presented in my paper was of what I called a story world, from which many potential stories could be generated. This notion of a computational narrative not as a specific story but as a palette for collaborative storytelling with the interactor echoes the "potential literature" in Oulipo's name (as interactive fiction scholar Nick Montfort noted in his book Twisty Little Passages).

In general: generative methods techniques often lend themselves to collaborative play. One outcome of this fact is the development of mixed-initiative design tools where, rather than a human controlling a system to author a work, and rather than an algorithm producing a work in a completely automated fashion, both the human and program can take initiative to suggest, explain, and/or critique designs. The example I give in my slides is Julian Togelius's lab's mixed-initiative level design tool, Sentient Sketchbook.

Tool Design

I conclude with the takeaway that compositionality and usability are mutually beneficial, with Kate Compton's Tracery as a wonderful positive example, and Ceptre as a shameless self-promotional example. (I briefly mention Twine as a non-example, citing lack of compositionality as the source of most of its limitations.) For the slide with SHRDLU on it, I posed the question, "what if we took the idea of a REPL further and designed mixed-initiative programming environments that were more like being in conversation with a virtual agent?"

Conclusion

OBT is Obviously the Best Track, at the confluence of PL and various novel domains. At Different Games 2013, Clara Fernandez-Vara discussed how all the weirdo, artsy, non-mainstream, expressive play ideas being presented there were the future of games, largely because they come from a more diverse set of developers interested in expressing a broader range of human experience through play. Similarly, given the uncharted territory that we're exploring with interdisciplinary methods that don't often find a home at mainstream conferences, I suggest that OBT is the future of PL research.

Acknowledgements & Further Reading

A couple of things that I didn't include as explicit references in the talk, but that helped me structure the ideas presented therein:


