
Automatic Asymptotic Analysis

I recently tried to have a research conversation over Twitter -- always a questionable choice -- with an algorithms researcher, spurred by my comment that I was surprised to find no work (at a cursory glance) on automatically deducing or checking asymptotic runtime analyses.  After all, went my thought process, when a TA looks at the solutions to my Graduate Algorithms homework and determines that the pseudocode meets the runtime signature specified in the assignment, a certain amount of that judgment is entirely mechanical.  In Constructive Logic at CMU, we use Tutch to give students automatic feedback on natural deduction proofs -- something that's very simple to check by hand, but which can be learned much more quickly through a rapid sequence of alternating attempts and feedback.

The comment was met with skepticism, which I eventually decided to consider warranted at least to the extent that we want to a) perform analyses that require constructing adversarial input and b) infer rather than check asymptotic bounds.  But I'm still not convinced of the fundamental claim that the subset of algorithms we could successfully use inference for has only a tiny overlap with what would be useful to automatically check.  It feels archaic to me that we're still writing pseudocode and doing analyses on paper -- at the very least, couldn't we formalize the analyses we construct?  Maybe algorithms researchers don't care, but as a student of algorithm design and as a programmer, I think I would find even primitive automatic analyses pretty useful, so that when I tweak code, I know when I break the performance signature.  Do we have different goals, or do I still have unrealistic expectations?
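To make the "checking a performance signature" idea concrete, here's a minimal sketch of the crudest possible version: count an algorithm's basic operations at several input sizes and estimate the growth exponent from a log-log fit. Everything here (the function names, the choice of bubble sort, the use of a least-squares slope) is my own illustrative assumption, not anyone's actual tool -- and unlike a real checker, it can only refute a claimed bound on the inputs it tries, never prove one.

```python
# Toy empirical "asymptotic check": count the basic operations a function
# performs at several input sizes, then estimate the growth exponent k in
# ops ~ n^k from the slope of log(ops) against log(n).
import math

def count_ops_bubble_sort(n):
    """Number of comparisons bubble sort makes on a worst-case
    (reverse-sorted) input of size n."""
    xs = list(range(n, 0, -1))
    ops = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            ops += 1  # one comparison = one basic operation
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return ops

def estimated_exponent(count_ops, sizes):
    """Least-squares slope of log(ops) vs. log(n), i.e. the apparent k."""
    pts = [(math.log(n), math.log(count_ops(n))) for n in sizes]
    mean_x = sum(x for x, _ in pts) / len(pts)
    mean_y = sum(y for _, y in pts) / len(pts)
    num = sum((x - mean_x) * (y - mean_y) for x, y in pts)
    den = sum((x - mean_x) ** 2 for x, _ in pts)
    return num / den

k = estimated_exponent(count_ops_bubble_sort, [100, 200, 400, 800])
print(round(k, 1))  # close to 2.0, consistent with an O(n^2) signature
```

Even something this naive would catch the "I tweaked the code and accidentally made it quadratic" regression; the interesting research question is how far past empirical spot-checks a mechanical checker can go.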

Here are some examples of related work from the PL side that suggest maybe this is not so far-fetched:


  1. Also (I forgot this reference the second time we talked) -- for an automatic but less expressive take on the complexity analysis, you should see "From Datalog rules to efficient programs with time and space guarantees."

    Again, even in the cost-semantics-for-logic-programs setting, what's lacking is the thing in the middle -- a fully mechanical way for the computer to check your work.  It's either "prove on paper" or "infer automatically."

  2. Hi Chrisamophone, you might be interested in François Pottier's slides on "Types for complexity-checking."

  3. I just read this comment now and skimmed the slides -- this looks great! Thanks!

