Research testbeds considered harmful (Future Internet - Dagstuhl Seminar)
26 March 2013
The best research solves the best problems. To find the best problems, look within the known widespread or large-scale problems. Almost everyone will know about the high-level problem; it is the successful researcher who looks at the details to find a game-changing specific problem whose solution leads to a breakthrough.

In our profession, many of the best problems become apparent only when a system has been deployed for large-scale use. For example, it is hard to predict the creativity of hackers in a lab behind closed doors. Scalability issues often become apparent only after millions or billions of users adopt a solution and start relying on it in their daily lives. Cost effectiveness - often one of the hardest problems, and often the source of exciting inventions - rises in importance when commercial deployments are being considered. The resiliency of a network is non-trivial to determine on a whiteboard or even in a lab. How do the various components that make up a network react to dirt, dust, or extreme temperatures? Will they hold up? Will we have to design the system differently? All of these questions have to be addressed when designing networks for the real world, and they are some of the hardest problems to solve.

As such, experimental research, running code, and large-scale tests are key ingredients for making progress, having real impact, and being relevant. While lab experiments provide initial insights into the feasibility of an approach, they can rarely address questions of scalability and resiliency. As a result, we often set up larger-scale overlay testbeds that run as a kind of virtual network on top of production networks. Using examples from earlier work with experimental overlay testbeds and real-life production networks, this talk will address the need for experimental research, but also discuss the challenges and shortcomings of overlay-based approaches.