Or: Have fun handing out wristbands to sixteen thousand people
Chaos Communication Congress has grown to be a huge event – this year, more than sixteen thousand people are expected to attend 35C3, and the numbers have been steadily growing to reach this point. It's an overwhelming event, organised entirely by volunteers, who take care of drinks, space allocation, medical services, logistics, finances, internet and phone services, content selection, talk streaming and recording, and so many other areas that I could fill ten blog posts just telling you about them.
Logistical challenges don't scale linearly with the size of an event. If you're involved with events of different sizes, you'll come to notice that there are certain levels at which requirements change – some challenges only start manifesting for large events, and sometimes the nature of the challenge changes very suddenly. For example, finding a venue for 200 or 300 or even 400 people is a very different matter from finding a venue for 600 people. One of the many challenges Chaos Communication Congress faced when growing was the entrance: checking tickets and giving out wristbands (or badges, before that) is a different challenge for two thousand attendees than it is for, say, fifteen thousand of them.
The check-in desk grew with the challenges, though: from a "finish the bloody software while the queue forms" setup, things evolved by 2013 to custom software running on five beautiful custom-built boxes, each containing a computer, printer, scanner, and money drawer, plus their own network and power distribution. As of a few years ago, we don't sell tickets on-site, which also helps a lot in reducing waiting time – people fiddling with their money do not make for a fast turnover, it turns out. We learned other valuable lessons, too: we have people shoving the queue towards the empty cashdesks, other people reminding the queue to have their QR codes ready (you might think that signs do the trick, but they don't), and the volunteers staffing the stations have evolved with years of practice from blindingly fast to relativistic speeds. Yes, if you thought you spent half a minute at the cashdesk, it was really only ten seconds – we have the statistics to prove it.
Having learned the many, many painful lessons a long event queue can and will teach you, we decided to rewrite the software running on our boxes three years ago. Our primary objective was to reduce the number of interactions a volunteer had with the software, to make the workflow even smoother, and to get more help from the volunteers who run interference and keep the stations supplied with new wristbands. While hacking on the software, we had the idea of figuring out how long people were actually standing in the queue – up until then we had either asked them (and tried to filter out answers like "at least a million years, I counted!!"), or just walked along with somebody in the queue, which was often fun, but not exactly efficient. Instead we started printing QR codes, handing them to the last person in the queue, and asking them to show the code alongside their ticket, giving us fairly good estimates of how long people spent waiting.
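The measurement idea boils down to a token with a timestamp: note when a QR code is handed out at the back of the queue, note when it is scanned at the cashdesk, and the difference is the waiting time. A minimal sketch of that bookkeeping might look like this – all names and the in-memory store are hypothetical, not the actual c3queue implementation:

```python
import base64
import secrets
import time

# Hypothetical in-memory store mapping token -> time it was handed out at the
# back of the queue. Real software would persist this; a dict is enough here.
issued_tokens = {}

def issue_token(now=None):
    """Create a fresh token to print as a QR code and hand to the
    last person in the queue."""
    token = base64.urlsafe_b64encode(secrets.token_bytes(9)).decode()
    issued_tokens[token] = time.time() if now is None else now
    return token

def redeem_token(token, now=None):
    """Called when the token is scanned at the cashdesk alongside the
    ticket; returns the waiting time in seconds, or None for an
    unknown or already-redeemed token."""
    issued_at = issued_tokens.pop(token, None)
    if issued_at is None:
        return None
    now = time.time() if now is None else now
    return now - issued_at
```

Each token is single-use (it is popped on redemption), so a re-scan cannot distort the statistics; the collected durations can then be aggregated into the per-day waiting-time plots.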
In the spirit of making useful data publicly available, we've decided to launch c3queue.de. On this page you can see the data we collected on Day 0 and Day 1 of 33C3 and 34C3 (after that, there is no queue). As you can see, while we had no noticeable waiting time at 33C3 (which is quite a feat with 12k attendees!), people could wait up to an hour to redeem their ticket at 34C3. There were a couple of reasons for this: on Day 0, cash payment for public transport tickets was handled at the entrance (and at EUR 16.70, a very practical amount), which held up the queue a lot. We also started later than usual due to delays in other areas, and we were still getting to know the venue, leading to less than optimal queueing at times.
Not to mention the ten-minute holdup created during the busiest time on Day 1. Imagine – masses of people waiting for their ticket, the waiting time looks to be up to thirty minutes already, and suddenly your server starts throwing errors. Sometimes tickets can be redeemed, but it takes many tries, and things look to be getting worse. You look at the server, and its load is through the roof – which shouldn't happen, as you are on your own private network. First you are afraid that somebody gained access to it, but the requests look legitimate (the queue grows). You try to debug the network, the web server, the application server, and the database all at once, all hands on deck. All hands on deck means: two people debugging, in two places (the queue grows). Suddenly, one of the supporting volunteers notices something strange about the supporter laptop and pulls its network access. Everything starts working again, after about eight minutes of a full stop (it felt like an hour, though). The problem was a jammed key on the supporter laptop. Naturally, it was the F5 key.
Anyway: this year, during 35C3, the page will show near-real-time data on waiting times. Happy Waiting!