Automatic Asymptotic Analysis

I recently tried to have a research conversation over Twitter -- always a questionable choice -- with an algorithms researcher, spurred by my comment that I was surprised to find (at a cursory glance) no work on automatically deducing or checking asymptotic runtime analyses.  After all, went my thought process, when a TA looks at the solutions to my Graduate Algorithms homework and determines that the pseudocode meets the runtime bound specified in the assignment, a certain amount of that work is entirely mechanical.  In Constructive Logic at CMU, we use Tutch to give students automatic feedback on natural deduction proofs -- something that's very simple to check by hand, but which students learn much more quickly through a rapid alternation of attempts and feedback.

The comment was met with skepticism, which I eventually decided was warranted at least to the extent that we want to (a) perform analyses that require constructing adversarial inputs and (b) infer rather than check asymptotic bounds.  But I'm still not convinced of the underlying claim that the subset of algorithms for which inference succeeds has only a tiny overlap with those it would be useful to check automatically.  It feels archaic to me that we're still writing pseudocode and doing analyses on paper -- at the very least, couldn't we formalize the analyses we construct?  Maybe algorithms researchers don't care, but as a student of algorithm design and as a programmer, I think I would find even primitive automatic analyses pretty useful: when I tweak code, I'd know whether I've broken its performance signature.  Do we have different goals, or do I still have unrealistic expectations?
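Even well short of formal inference, a crude empirical check of the kind I have in mind is easy to sketch. The snippet below is a hypothetical example of my own (not from any of the work cited here): it times a function on inputs of doubling size and fits the slope of log(time) against log(size), which roughly recovers the exponent k for O(n^k) code. A regression test could then flag a change that turns a linear pass into a quadratic one. The function and parameter names are all invented for illustration.

```python
import math
import time


def empirical_growth_exponent(fn, sizes, make_input):
    """Estimate the polynomial growth exponent of fn's runtime.

    Times fn on inputs of increasing size and fits the least-squares
    slope of log(time) vs. log(size); for code that runs in roughly
    O(n^k) time, the slope approximates k.
    """
    times = []
    for n in sizes:
        data = make_input(n)
        start = time.perf_counter()
        fn(data)
        times.append(time.perf_counter() - start)
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    # Least-squares slope in log-log space.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den


def quadratic(data):
    # Deliberately O(n^2): every pair of elements is visited.
    total = 0
    for a in data:
        for b in data:
            total += a * b
    return total


sizes = [200, 400, 800, 1600]
exponent = empirical_growth_exponent(quadratic, sizes, lambda n: list(range(n)))
```

This is obviously much weaker than checking a formal analysis -- timing is noisy, constants intrude at small sizes, and it says nothing about adversarial inputs -- but as a "did I break the performance signature?" tripwire in a test suite, even this would be something.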

Here are some examples of related work from the PL side suggesting that this may not be so far-fetched:


  1. Also (I forgot this reference the second time we talked): for an automatic but less expressive take on the complexity analysis, you should see "From Datalog rules to efficient programs with time and space guarantees."

    Again, even in the cost-semantics-for-logic-programs setting, what's lacking is the thing in the middle -- a fully mechanical way for the computer to check your work.  Right now it's either "prove on paper" or "infer automatically."

  2. Hi Chrisamophone, you might be interested in François Pottier's slides on "Types for complexity-checking" at

  3. id: I just read this comment now and skimmed the slides -- this looks great! Thanks!

