Ask us Anything
Making the Hunt
We started working on the hunt in September, and decided on the theme and mechanic well before the Mystery Hunt. We were surprised to see that mechanic (ambiguous meta pairings) show up there too -- but it was nice to know that another writing team thought the idea was worth pursuing! Note that the 2017 Galactic Puzzle Hunt also involved assigning answers to metas, and we weren't the first hunt to use this gimmick then either.
The Archivists started as Patrick Xia's idea, and was iterated on several times. Initially the mnemonics spelled out answers and not puzzle titles, but that was extremely backsolvable. We changed that, and then after a second testsolve we reordered the objects so that they were no longer in groups, and then after the third we added the panorama. All of these changes came from group brainstorms.
The Artists came from a group brainstorm and went through a lot of different iterations with different groups.
The Animal Trainers started out as Alan Huang's idea and went through a few different brainstorms with different groups before it got to where it is.
The Astrologers was written by Lennart Jansson.
The Artifact was Anderson Wang's riff on a meta type that several groups had come up with different versions of already.
One meta idea was thrown out from each of the intro and main rounds -- though we don't want to talk about how they work, in case they become puzzles in a future GPH. The intro meta (internal codename "headless chicken") evolved from a pair of ideas "chicken" and "turkey". More about this can be found in the author notes for that puzzle.
We're strongly in favor of puzzles that can be collaborated on remotely, and we think most of ours work well for that use case. In fact, almost all our testsolving was done by remote groups, usually with Google Sheets plus Discord voice chat. There were only a few puzzles this year that we anticipated would be harder this way, such as The Farmer's Dilemma. Other puzzles we saw people describe as harder remotely were Peaches and Race for the Galaxy, but we felt that with appropriate communication protocols the impact of being spread out could be minimized for these puzzles. For Peaches, our testsolvers judiciously used spreadsheets to solve and record complete solutions to all the subpuzzles before entering them into the game; for Race for the Galaxy, they communicated over voice chat and moved on together as soon as the first person solved a given subpuzzle.
Puzzles explicitly geared toward collaboration, as opposed to just being no worse for remote solvers, are an interesting idea and might show up in the future -- but no promises!
The goal of creating and using a constructed language was among the first things we decided on for this year's hunt. Incorporating aliens was a natural choice from there. The Antarctic flavor was decided on as a "shell" plot a few months later; our main artist, Lennart Jansson, posted a mockup and everyone was on board. We felt it both provided an appropriate layer of mystery and let us visually differentiate this year from our previous hunts. We also wanted to avoid a traditional archaeological dig setting, as an archaeology expedition leading to alien contact had already been explored in DASH 9.
You'll have to wait until next year to see! That said, we typically try to avoid puzzles where not knowing something ahead of time would put solvers at a big disadvantage.
We would be excited to run the MIT Mystery Hunt, but will only make concrete plans if we are actually scheduled to write one. In general, we highly value twists, surprises, and new ideas at every level of a puzzlehunt. If we ever have the chance to write a Mystery Hunt, whatever we produce will likely reflect these goals.
Most of us are in the United States, but we have team members on both coasts (and some on the other side of the world) and with wildly differing sleep schedules. East coast early birds take over from Asia and West coast night owls around 9-10 AM Eastern.
26 unique people answered at least one adviceberg, and 16 unique people answered at least 50. Usually between 1 and 5 people were actively answering hints at any given time. Special thanks go to Yannick Yao, Charles Tam, Patrick Xia, and Ben Yang for answering over 500 hints each.
We started up a Discord server shortly after last year's hunt concluded, with sporadic discussion until regular meetings started up in the fall. The overall theme was settled in early October. From then until January, the basics of the conlang, such as phonetics and word order, were hammered out, as were the meta structure and hunt progression. Regular puzzle construction opened up around the new year, progressing in parallel with conlang vocabulary and grammar. There was an early burst of theme work to get the website ready to announce at Mystery Hunt, but most of the final decisions were made in March.
For a "fairly standard word puzzle", it's probably a few hours of brainstorming, anywhere from an hour to 10-20 hours writing clues, and then testsolving, iterating until it's fun and elegant, and, if necessary, designing the page so it doesn't look like a boring list of clues.
For programming-heavy puzzles, the "writing clues" stage above is replaced by "writing code". Unsafe took maybe 50 hours to make.
Peaches is a special case because in addition to programming, there was a lot of art to draw: about a dozen comic panels, two dozen unique character sprites, various backdrops, and UI elements. The art took maybe 100 hours, and the coding took around 30 hours. A lot of features (like the dialog box) were made much easier by a library Nathan wrote. Coming up with powers and levels that worked for the extraction also took a week of mulling.
The Wepp Perflontus Bakeoff was probably the biggest direct inspiration, but we've had several theme proposals about alien contact over the past few years. Several semi-independent lines of thought, including "what if the puzzlehunt taught you a language" and "secretly friendly aliens but you misheard them", converged on the overall conlang theme.
Most of the meta ideas came out of group brainstorming sessions. People would split into groups and think about different ways to construct interesting puzzles with language. Once we had something good, we would iterate on it repeatedly until it was something we liked.
We've updated a lot of solution pages to add additional notes on construction. A few more comments on individual puzzles follow:
Brian: I think I go over my inspiration in the authors' notes for Unsafe, but in a sentence, I like text adventures and feel like too few "straight" text adventures appear in puzzlehunts — they usually have global gimmicks and feel like they need to be played rather differently than a normal text adventure.
DD: For Peaches —
- September 2018
- DD: Nathan we need to write a bowsette game
- Nathan: Oh my god great idea
- (five months pass and we don't do anything)
- February 2019
- Through lots of iteration and frustration, find excuses to get as many -ettes into a puzzle as possible
Nathan: Word Search was received warmly last year, especially by newer teams, so we wanted to make another puzzle in that vein. With both Word Searches, we were looking to make "comfort food" puzzles — nothing experienced teams will go crazy over, but hopefully a fun and relaxing way to spend a few hours for anybody else.
Puzzles were written when the authors had an idea they liked. We enforced a few light rules (such as "an author can't have two puzzles in development at the same time"), but didn't otherwise direct puzzle creation. A puzzle goes through various stages of revision (initial idea, development, testsolving, revising, post-production, factchecking), each of which can take a varying amount of time depending on everyone involved, so how long it takes to write a puzzle can be unpredictable. We had an internal deadline of testsolving around three weeks before the hunt; while it was not strongly enforced, most puzzles met it.
Many intro round puzzles (such as Polar Sales and Courage and Purity) were among the first to be completed. Two of the latest puzzles to be written were Puzzle of Dragons and Observatory (the latter replacing a puzzle that had recently been cut).
We start with the metas, as it is very difficult to write an engaging meta around answer phrases that are not related in some way.
Each meta unlocked when a team solved all but 2 of the puzzles corresponding to that meta. We deliberately kept this mechanism somewhat obscure: since teams had to figure out the correspondence between puzzles and metas, we didn't want them to (correctly) assume that the most recently solved puzzle corresponded to the meta it had just unlocked. We realize this wasn't an ideal solution, and we're sorry for any hardship it caused.
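As a minimal sketch (the function and names here are illustrative, not our actual hunt code), the unlock rule looks like:

```python
# Illustrative sketch: a meta unlocks once all but two of its
# corresponding puzzles are solved. Names are hypothetical.
def meta_unlocked(meta_puzzles, solved_puzzles):
    """Return True once at most two of the meta's puzzles remain unsolved."""
    unsolved = [p for p in meta_puzzles if p not in solved_puzzles]
    return len(unsolved) <= 2
```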
We don't have a specific plan to make procedurally generated or interactive puzzles, but we find that in many cases these types of puzzles are the best way to deliver really fresh and creative ideas.
Usually we have a few people "on call" at a given time to answer the hints. Most of us are familiar enough with most of the puzzles to answer hints about them. The authors of each puzzle also prepare short guides for the rest of us to help answer common hint requests. Once the hunt has been going for a while, past hint responses become a good resource to help unfamiliar people give hints on puzzles.
Across the team, we're aware of most of the common tools, and try to write puzzles with them in mind. Editing and testsolving usually catch cases in which different tools might lead to different results. It's not perfect, of course.
We (ab)used HTML ruby annotations — commonly used for rendering the pronunciation of East Asian characters — to record teams' English translations of Puflantu words. These tags have nothing to do with the Ruby programming language. Our website is written in Python and built, as you say, with Django.
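As a rough illustration (a hedged sketch, not our actual code, and the function name is hypothetical), a Puflantu word and a team's translation combine like this, with `<rt>` holding the annotation and `<rp>` providing fallback parentheses for browsers without ruby support:

```python
# Hedged sketch: wrap a Puflantu word in an HTML ruby annotation
# showing a team's English translation.
def ruby_annotate(word, translation):
    return (f"<ruby>{word}"
            f"<rp>(</rp><rt>{translation}</rt><rp>)</rp></ruby>")
```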
All of us produce the hunt as a hobby and because we think it's worthwhile, so we prefer not to think about it too much! We think the number of hours spent per person hasn't gone up much since last year (with the exception of the language developers), but we do have a larger writing team than last year's, so the overall number of hours has likely gone up a little. From 2017 to 2018, the amount of work each person put in increased drastically.
Jakob Weisblat served in an organizational role this year, in terms of running meetings and keeping things rolling, but most hunt design choices were either by broad consensus, or by a small ad-hoc group that decided to take on that task. Editors were in charge of their respective puzzles. This structure had advantages and disadvantages, and we may explore a different organizational structure next year.
Yes! It is here.
The bell sounds!
Not entirely sure, but lots of bells are involved.
About a month before the hunt, Rahul lamented that we had no puzzles that autoplayed Rock Lobster. While brainstorming what such a puzzle would look like, he came up with the main idea behind Ten Years Later: "let's do Overtime, but with a fake hunt." At that point the only answer left was THE LION THE WITCH AND THE WARDROBE, and the shortest cluephrase we could come up with involved 13 letters. We put out a request for silly mini-puzzles and ended up with a lot of great ideas, which contributed to the long author list. Since this was an intro round puzzle, we optimized for humor instead of meatiness; we hope you got a good laugh out of some of them.
If you are logged in, we track when you re-encounter the 2009 site. After you regain the memories of our wonderful 2009 hunt, the link appears on the archive page.
We think that the only puzzle we received was a recording of The Llama Resolution's hint request. They submitted a hint request in meter, set to Yakko's song, and we were so impressed that we asked for a recording! They sent us tracks titled air, earth, water, and fire. We're still trying to databend the final recording! Other than that, the closest we got to a puzzle was a puzzle-ish representation of pi from Foggy Brume of TeamName (and of P&A magazine). He sent us a song about how physics classes make him want to drink, in which each word length is a successive digit of pi.
Yes, we sent a transcript to anyone who requested one.
Chris: I've updated the author's notes with a little backstory.
DD: The puzzle design started with finding a reasonable set of powers that could transform SKILIFT into GORGEOUS CAKE, which took a lot of trial and error. I would have liked to use each power exactly once, but had to settle for Wiggler being used twice in order for the powers to be reasonable enough to build the levels. (Adding AT LEAST to the clue phrase also forced the game to be a bit long.) Levels were generated by using Epicurious's recipe API to get a word pool, and then running a search over all orders of operations going backwards from words and phrases in the pool to a list of nouns. The number of times Bowser's power could be used was capped both to keep the search space manageable and to make sure the puzzles weren't degenerate (i.e., burn all the existing letters, then make a new word from scratch). I then tested the generated puzzles by hand to pick levels that were logically interesting.
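The backwards search might look something like this hedged sketch. The powers here are deliberately generic stand-ins, not the real Peaches powers, and the whole function is an assumption about the shape of the search, not the actual generator:

```python
# Hedged sketch of the level-generation search: from a target word, try
# orders of (already-inverted) powers and see whether any sequence reaches
# a word in the starting pool. The powers are illustrative stand-ins.
from itertools import permutations

def find_level(target, powers, start_words, max_depth=3):
    """Return the first (power order, start word) pair that works, else None."""
    for depth in range(1, max_depth + 1):
        for order in permutations(powers, depth):
            word = target
            for power in order:
                word = power(word)
            if word in start_words:
                return order, word
    return None

drop_first = lambda w: w[1:]   # stand-in inverse power
reverse = lambda w: w[::-1]    # stand-in inverse power
```

For example, `find_level("abcd", [drop_first, reverse], {"dcb"})` discovers that dropping the first letter and then reversing reaches a word in the pool.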
(See the authors' notes for a few more technical details.)
The Peaches bug was caused by an anti-cheating mechanism built for source-divers. Unlike the other levels, the question mark level requests an image from our server, sending as parameters the list of all moves used to unlock all levels. The server then validates the solutions for all the levels before returning the image. If validation fails, no image is returned and the level does not open. The issue was a last-minute bug fix made in the client-side code but not carried over to the server-side code: in the server-side code, Bowser's power removes all spaces in addition to removing the alphabetically first letter, while the client-side code doesn't remove spaces. Teams that used a particular order of operations on the ANISE TAHINI level got hit by this bug. The way it manifested was very unfortunate, and we'll be sure to give teams an obvious error if server validation fails on any puzzles we make in the future!
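In Python, the divergence looked roughly like the following (an illustrative reconstruction, not the actual Peaches source; the function names are made up):

```python
# Illustrative reconstruction of the client/server mismatch.
def remove_first_letter(word):
    """Remove one copy of the alphabetically first letter."""
    first = min(c for c in word if c.isalpha())
    return word.replace(first, "", 1)

def bowser_server(state):
    return remove_first_letter(state.replace(" ", ""))  # spaces stripped

def bowser_client(state):
    return remove_first_letter(state)                   # spaces kept
```

Any state containing a space makes the two disagree, which is how a particular order of operations on ANISE TAHINI could pass client-side but fail server-side validation.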
Lewis: I've updated the author's notes with some more notes on inspiration on creating the puzzle, as well as even more nitty-gritty construction details.
Jakob: We wrote the clues and the answers at the same time. I use CrossFire as my primary crossword construction tool, so I loaded a 15x15 grid into CrossFire and added the cryptic-style black squares. Then I added enough extra black squares to make it fairly easy to fill, and started selecting entries that looked nice. I think FANTASY FOOTBALL was the first seed entry. We started the clue grid by writing the shortest clues, which were the most constrained, and gradually added other clues shortly after adding the words. We left clues for entries like FANTASY FOOTBALL for last, since we were confident we could make them work no matter what the every-7-letters constraints turned out to be. Once we had part of the grid filled in, it was a matter of iterating: trying to fill a corner, backtracking if we couldn't get it to work, and so on. For THE, we had a list of 11-letter ___ THE ___ phrases and chose between them based on the crossings. The last part to fill in was the bottom right, by which time we had abandoned all thought of grammar or spelling and were happy to use something like "shapes having two plus three angle". By the end of the construction process, my eyes hurt from all the grey dots. If you don't know what I mean, stare at the second solution image (a screenshot of the construction sheet) for a minute.
We used the interactive fiction engine Inform 7, which is the most common tool people use for this type of parser-based game (interactive fiction / text adventures).
For this particular puzzle, we went about it in four steps:
- Choose the endings that we thought we could put in a text adventure, that fit the cluephrase
- Write out in words how we thought the endings would be achieved (e.g. the cage idea, the bathtub, etc)
- Design the layout of the world to make the numbers match up
- Implement all the individual components — this was by far the most labor-intensive part. The world ended up having over 30 "rooms" and nearly 200 objects to interact with.
Lots of testsolving was necessary to make sure they didn't interact in weird ways. At one point, if you brought a mouse into the spaceship, you were no longer able to talk to the ship because the game assumed you would be talking to the mouse. The solution to that particular problem was to tell the game that mice are not animals, but things. As evidenced by the errata we issued during the hunt, we still weren't able to find all the issues through testsolving.
The puzzle has four components:
- The hacked Pokemon ROM, running in an emulator
- A modified version of this script for sending inputs from twitch chat to the emulator
- OBS to actually stream the game
- A chat bot to remove disallowed messages
The actual maps were crafted tile by tile using ROM hacking tools such as AdvanceMap.
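The chat bot's job was simple filtering. A minimal sketch (the exact command set and names are assumptions, not taken from our actual bot):

```python
# Hedged sketch: only recognized button names get forwarded to the
# emulator; anything else is dropped.
ALLOWED_INPUTS = {"up", "down", "left", "right",
                  "a", "b", "l", "r", "start", "select"}

def forwardable(message):
    """Return the normalized input to forward, or None to drop the message."""
    command = message.strip().lower()
    return command if command in ALLOWED_INPUTS else None
```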
There are two main differences between novice and experienced puzzlers: their ability to assess which ideas might be correct, and how quickly they can test those ideas.
For the first, the best way to improve is to expose yourself to lots of puzzles, so you can be familiar with what mechanics and techniques exist. You don't necessarily need to solve them, but read and understand the solutions well enough to fully explain them to somebody else. Writing your own puzzles is also a great way to improve, as it will help you develop a sense for what can be reasonably constructed as well as what is elegant (something constructors often strive for).
For the second point, get as familiar with puzzle tools as you possibly can. Don't solve an anagram in your head; use Nutrimatic. If you're looking for a 7-letter verb that starts with PL, don't sit there and think, don't even write a program; use OneLook or something else. Develop a workflow for efficiently working on a puzzle (most teams use Google Sheets) and, if possible, include macros for doing common puzzle operations like indexing, removing spaces, and taking letters that appear in one string but not another. Wasted time adds up.
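For example, the common operations mentioned above are only a few lines each in Python (sketches of the general techniques, not any particular team's macros):

```python
# Sketches of common puzzle operations: indexing, removing spaces,
# and taking letters in one string but not another.
from collections import Counter

def index_into(phrase, n):
    """Take the nth letter (1-indexed) of a phrase, ignoring spaces."""
    return phrase.replace(" ", "")[n - 1]

def remove_spaces(phrase):
    return phrase.replace(" ", "")

def letter_difference(longer, shorter):
    """Letters in `longer` but not `shorter`, with multiplicity, sorted."""
    diff = Counter(remove_spaces(longer)) - Counter(remove_spaces(shorter))
    return "".join(sorted(diff.elements()))
```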
Check your work. It's very draining, so most teams don't do enough of it. As solvers, we've had tiny errors cost us hours before we realized we'd had the right idea all along. Puzzle tools help here: the more of your work you automate, the less likely you are to make an error.
TESTSOLVE EVERYTHING. Have people who are completely unspoiled on your puzzle try to solve your puzzles, and listen to what they have to say. Preferably do this twice with disjoint sets of people. Though we have many experienced writers, we can never predict how hard, long, or fun a puzzle will be, or whether it will have a problem or particularly nasty red herring we hadn't expected, until we testsolve it.
If you're a newer writer, try to construct your puzzles around ideas rather than around themes or references. Everyone will appreciate a good idea with a bad theme, but only people who like the theme will like the puzzle if the idea is lackluster.
Do others' puzzles and puzzlehunts. It will give you a better idea of what puzzle steps are fun and what steps aren't. Doing others' puzzles can also help with finding inspiration: when you're trying to figure out how a puzzle works, you might come up with incorrect ideas that you can later write into your own puzzles.
Keep a notebook or text file of potential puzzle ideas. Add to it often.
Other than that, write the types of puzzles you'd like to see in a hunt. Passion is the most important ingredient of a good puzzle.
For more specific tips, you might want to check out this guide that a couple of us put together a while back.
Many puzzles contain a central idea that their authors think will be interesting or fun ("What if solvers had to figure out how a word search generation algorithm works?" for Ministry of Word Searches). Other puzzles are built around a specific experience ("wouldn't it be cool if Dumb Ways to Die were a text adventure" for Unsafe, "wouldn't it be fun to race against the clock" for Race for the Galaxy). Either of these can work, but the most important thing is to build your puzzle around an idea you think is fun and trim anything that gets in the way of it.
For people newer to these types of puzzles, we'd first recommend solving a lot of puzzles to get a feel for them. For more experienced solvers, we'd love to see more people get some friends together and put together their own hunts -- there's no better way to practice.
We believe 2 teams forward-solved Observatory with 0 hints.
Many of us predicted how many teams would finish before Monday. Our average guess was 6 teams and our median guess was 5.5, and nobody guessed below 2 teams. All but one of us thought the first team would finish before Monday. We were indeed surprised at how off-base these predictions were!
Usually about 2-10 people, out of a pool of ~25, were online at any given time to answer hints and emails.
We can usually tell whether a given solve was a backsolve, based on its timing relative to the meta solve and the difficulty of the puzzle, but we can't always be sure. (Other nearby guesses can also be evidence for or against backsolving -- if a team is guessing answers from three letters of a meta constraint, or guessing the same answer to all of their open puzzles, that's probably a backsolve, whereas if they recently called in a cluephrase or an intermediate step, it's probably not.)
The stats page lists backsolves, which we calculate via a simple heuristic: anything solved from five minutes before the corresponding meta solve onward counts as a backsolve. For this hunt, this is usually an underestimate, as several puzzles could be backsolved quite a long time before their corresponding metapuzzle was solved.
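That heuristic is essentially a one-liner (a sketch with assumed names, not our actual stats code):

```python
# Sketch of the stats-page heuristic: a solve counts as a backsolve if it
# happened no earlier than five minutes before its meta was solved.
from datetime import datetime, timedelta

def is_backsolve(solve_time, meta_solve_time):
    return solve_time >= meta_solve_time - timedelta(minutes=5)
```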
Neither of these was planned from the start. We added them in response to the intro round and the hunt being more challenging than we expected, so that teams who were interested in seeing a larger portion of the hunt could do so. That concern was somewhat particular to this hunt, since we specifically wanted lots of teams to be able to get into learning the language, which was gated behind puzzle solves.
Seth: (C)het for ambiguity. This isn't relevant for anything :P
- Mike Teavee (we had to look him up, but everyone is better than Veruca)
Most of us went to MIT.
Colin: The moment from starting Stephen's Sausage Roll to finishing it
Nathan: same as Colin
Just as soon as we finally figure out a way to beat the paper tigers in the grand finals!
We write puzzles as a hobby in our free time, not at work. Managing the hunt doesn't require constant attention, so many of us are available as needed (and for conversations that come up in Discord) but otherwise not paying attention. All of us are always excited to see how the hunt goes, so we're usually a little less productive that week.
Most of the basic functionality is slightly expanded but largely the same as 2017's (teams, puzzles, answer submissions, hints). We've added a lot of functionality as needed to support features like rate-limiting, messaging Discord channels, and sending automated emails.
Chris: "Cusp" = sharp point. "-ation" = process. So it could be interpreted as "some process related to sharp points" such as getting poked. But mostly I just liked the way the word sounds. :)
Have you heard of Battle School? It's kinda like that. Maybe.
They've learned to play a little nicer, but they're still very much alive and happily munching away.
Nathan: The only one I've tried is Mark Bittman's, which is excellent (though you should add some mozzarella, I'm surprised the recipe doesn't have it). Make sure to use good tomato sauce though. I like Rao's Homemade marinara sauce since I'm too lazy to make my own, but it's expensive. Any San Marzano-based tomato sauce should work great.
Colin: I watched a few episodes but haven't gotten around to finishing it yet.