
Contact Information

IT University of Copenhagen
Rued Langgaards Vej 7
2300 Copenhagen S, Denmark
+45 7218 5369

Philippe Bonnet

Associate Professor and Marie Curie Fellow
Software and Systems Section
IT University of Copenhagen

Spring 2015: Teaching Big Data Processes and Database Tuning. Office hours: Mon and Wed, 12:00-13:00.

Fall 2014: Teaching Big Data Management. Office hours: Tue and Thu, 12:00-14:00.



Philippe Bonnet is an associate professor at the IT University of Copenhagen and a Marie Curie Fellow. He is an experimental computer scientist focused on building and tuning systems for performance and energy efficiency. His research centers on IT systems that will contribute to a fossil-free society: energy-proportional systems, software-defined buildings, database systems on new hardware, and secure personal data management.


CV @ linkedin   |    Publications: full text @ ResearchGate, dblp, scholar

PhD Students: Matias Bjørling, Niv Dayan, Jonathan Fürst.

PhD Graduates: Javier González, Joel Granados, Aslak Johansen, Martin Leopold, Christoffer Hall-Frederiksen, Marcus Chang

Energy Futures

I am the anchor of ITU's activities in the area of Energy & IT. We form a network of computer scientists, software engineers, interaction designers, future archeologists and social scientists focused on prototyping together a range of possible fossil-free energy futures. The premise of our work is that the fossil-free society will be a digital society. We aim to educate the next generation of IT professionals who will contribute to the necessary transformations of the energy system. My contribution is in the areas of energy-proportional systems, software-defined buildings and secure personal data management.

Database Tuning

Database Tuning is the activity of making database applications run faster. Twenty years ago, Dennis Shasha abstracted a set of principles from his experience tuning database systems. Ten years ago, Dennis and I devised a set of experiments to quantify the performance impact of these principles. Today, the focus on energy efficiency, the advent of new hardware and the evolution of database systems introduce new trade-offs.
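As an illustration of the kind of experiment involved (a minimal sketch, not taken from the actual course material or textbook), one can quantify a classic tuning principle, the effect of an index on point-query latency, using SQLite as a stand-in DBMS:

```python
# Illustrative micro-experiment: measure point-query latency on a 100k-row
# table before and after creating an index on the lookup column.
import sqlite3
import time
import random

def time_queries(conn, n=200):
    """Run n point queries by key and return the total elapsed time."""
    keys = [random.randrange(100_000) for _ in range(n)]
    start = time.perf_counter()
    for k in keys:
        conn.execute("SELECT val FROM t WHERE key = ?", (k,)).fetchone()
    return time.perf_counter() - start

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (key INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"v{i}") for i in range(100_000)])

no_index = time_queries(conn)        # each query is a full table scan
conn.execute("CREATE INDEX idx ON t(key)")
with_index = time_queries(conn)      # each query is an index lookup

print(f"scan: {no_index:.3f}s  index: {with_index:.3f}s")
```

The same pattern (fix a workload, vary one physical-design knob, measure) underlies the tuning principles discussed above.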


Reproducibility

The experimental results published in a research paper should be reproducible. In traditional fields, data acquisition relies on long-term measurement efforts; in the context of some computational experiments, data acquisition is in fact repeatable. The data derivation phases should always be repeatable. I am looking into two sides of this problem. The first focuses on how to ensure reproducibility. This work was started by D. Shasha in the context of the SIGMOD conference in 2008. I was repeatability chair for SIGMOD and VLDB from 2011 until 2013; Stratos Idreos has now taken over this responsibility. I am editor-in-chief of the reproducibility section of Information Systems. The second focuses on defining an infrastructure for executable papers, where the derivation phases can be re-executed and possibly modified or integrated in other workflows. This is joint work with Juliana Freire and Dennis Shasha. See our tutorial at SIGMOD'12, previous articles (VLDB'11 (vision), ICCS'11) and a report from the SIGMOD'11 reproducibility committee.

Current Projects

CLyDE: Cross Layer Database Engine for Flash-based SSDs. In the CLyDE project (funded by the Danish Independent Research Council), we aim to redesign database systems for flash devices. Our goal is to explore how database systems can collaborate with the software embedded in flash devices, through so-called cross-layer optimizations, in order to improve overall performance. Our approach is based on the insight that flash devices can be programmed to provide predictable, high performance as long as the database system respects a set of well-defined constraints. This requires a complete rethinking of the interactions between the DBMS and secondary storage. This is joint work with Luc Bouganim at INRIA, Niv Dayan and Matias Bjørling.
  • Usenix ATC'14: Michael Wei's and Matias Bjørling's new take on IO speculation.
  • VLDB'13: EagleTree Demo. EagleTree is a discrete-event simulator that encompasses SSD, OS and applications (more on Github)
  • Systor'13: Jens Axboe's and Matias Bjørling's new Linux block layer adapted to SSDs and multicore
  • CIDR'13: Vision paper on DBMS and 2nd storage in the age of flash and PCM
  • VLDB'11: Tutorial on system design for flash devices
  • CIDR'11: Bimodal Flash Devices
RUBICO2N: The premise of RUBICO2N is that people are at the heart of energy efficient buildings. The central hypothesis is that buildings will not reach carbon-neutrality unless (1) the false assumption of endless energy supply implicit in current everyday practices is changed, and unless (2) the myriad of complex, opaque building subsystems is made visible and easier to manage. RUBICO2N aims to enable a culture of flexible energy demand, based on new forms of human-building interactions resulting from the transformation of buildings into cyber-physical systems. This is joint work with Jonathan Fürst and David Culler's group at UC Berkeley.
  • Sensys'14: Gabe Fierro's and Jonathan Fürst's demo of their BUSICO 3D building simulation engine.

Past Projects

INTERACT: Ecologists instrument ecosystems with in-situ sensing to collect measurements. Sensor networks promise to improve on existing data acquisition systems by allowing instrumentation in new places, and by interconnecting existing stand-alone measurement systems into virtual instruments networked and controlled for higher utility and dependability. A key challenge is to design autonomous systems that control the sensor network to meet the scientists' requirements in a dynamic environment.
uFLIP: Flash technology has great potential, promising increased throughput with reduced energy consumption. While flash chip behavior is very precisely specified, commercially available flash devices are not. Our goal is to understand the performance characteristics of these devices in order to a) compare the performance of competing devices, b) understand which class of flash devices best matches a given usage pattern, and c) influence the design of future devices.
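To make the benchmarking idea concrete, here is a minimal sketch (not the actual uFLIP harness) of the basic pattern behind such device micro-benchmarks: issue fixed-size writes in a sequential and a random order, and compare the elapsed times.

```python
# Illustrative IO micro-benchmark: time 4 KiB writes to a scratch file,
# first at sequential offsets, then at the same offsets in random order.
import os
import time
import random
import tempfile

BLOCK = 4096   # 4 KiB write granularity
COUNT = 256    # number of writes per pattern

def run(offsets, path):
    """Write one block at each offset and return the elapsed time."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, buf)
    os.fsync(fd)  # force the writes to the device before stopping the clock
    os.close(fd)
    return time.perf_counter() - start

path = os.path.join(tempfile.mkdtemp(), "bench.dat")
seq = [i * BLOCK for i in range(COUNT)]
rnd = seq[:]
random.shuffle(rnd)

t_seq = run(seq, path)
t_rnd = run(rnd, path)
print(f"sequential: {t_seq:.4f}s  random: {t_rnd:.4f}s")
```

A real harness varies many more parameters (IO size, queue depth, read/write mix, device state) and bypasses the OS page cache, but the measure-and-compare structure is the same.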
Trusted Cells: With the convergence of mobile communications, sensors and online social networks technologies, we are witnessing an exponential increase in the creation and consumption of personal data. The World Economic Forum even formulates the need for a data platform that allows individuals to manage the collection, usage and sharing of data in different contexts and for different types and sensitivities of data. To meet this challenge, we propose the vision of trusted cells, i.e., personal data servers running on secure devices to form a decentralized data platform. Our first goal is to apply the vision of trusted cells to manage smart meter data. My Marie Curie fellowship project, called PDS4NRJ, focused on a decentralized data platform that enforces a usage control model and thus guarantees privacy while enabling innovative services. I was focusing on applications in the domain of smart home energy management.
  • TrustData'13: Paper on an open execution framework suited for trusted cells.
  • CIDR'13: Vision paper on Trusted Cells.


Starting in Fall 2014, I am teaching the new Big Data Management class in the software development MSc program at the IT University, as well as the Big Data Processes class in the digital innovation and management line.

Since 2001, I have been teaching classes on Database Tuning, first at the University of Copenhagen and now at the IT University of Copenhagen. Together with Dennis Shasha, I have developed teaching material over the years, including slides, experiments and case studies. In Spring 2015, I will inaugurate a new revision of the textbook and new versions of the experiments.