Pilawa receives Google Faculty Research Award
Type the words "series-connected voltage domain" into a Google search, and as the servers at a Google data center respond to that query (along with millions of others), energy is wasted through inefficient system design. The same losses occur at every data center, and data centers collectively account for several percent of US electricity consumption.
The losses arise when the high-voltage electricity delivered to data centers is converted down to the lower voltages the servers use. "Power conversion losses just create extra heat that you then have to spend more power trying to remove and cool," said Pilawa, a member of Illinois' electrical and computer engineering faculty.
"But now, we were thinking, once you have hundreds and thousands of computers, rather than try to convert this big voltage down in many, many steps, why don't you stack up the servers, and they inherently get the voltage stepped down themselves?" In this architecture, the power converters currently used to step down voltage are eliminated; the series connection itself provides the conversion, so those losses are avoided entirely.
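As a rough illustration of the stacking idea (the bus voltage and server count below are hypothetical, not figures from the article), identical servers connected in series each see an equal share of the distribution voltage, with no converter in the path:

```python
def per_server_voltage(bus_voltage: float, n_servers: int) -> float:
    """In a series stack of identical loads, each load sees an equal
    share of the bus voltage: V_server = V_bus / N."""
    if n_servers <= 0:
        raise ValueError("need at least one server")
    return bus_voltage / n_servers

# Hypothetical example: a 384 V distribution bus across 32 stacked servers
print(per_server_voltage(384.0, 32))  # 12.0 V per server
```

The step-down ratio comes for free from Kirchhoff's voltage law, which is why the dedicated converters, and their conversion losses, can be removed.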
"You can reduce the amount of power conversion you do by being intelligent about how you query which computer, how you distribute your computational load," Pilawa said. This requires careful co-optimization of power delivery and computing allocation, something that is not currently done.
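A series connection forces the same current through every server, so the scheduler must keep per-server power draw balanced. One way to sketch the co-optimization Pilawa describes (the function name and workload numbers are illustrative assumptions, not the project's actual scheduler) is a greedy assignment that always hands the next task to the least-loaded server in the stack:

```python
import heapq

def balance_load(task_powers, n_servers):
    """Greedy balancing: each task goes to the server with the lowest
    accumulated power draw, keeping draws across the series stack close."""
    heap = [(0.0, i) for i in range(n_servers)]   # (current load, server index)
    assignment = [[] for _ in range(n_servers)]
    for p in sorted(task_powers, reverse=True):   # place the largest tasks first
        load, idx = heapq.heappop(heap)
        assignment[idx].append(p)
        heapq.heappush(heap, (load + p, idx))
    return assignment

# Hypothetical per-task power draws (watts) spread over a 3-server stack
loads = balance_load([30, 20, 20, 10, 10, 10], 3)
print([sum(s) for s in loads])
```

Greedy balancing keeps the per-server totals within one task's power of each other, which is the kind of constraint a series-connected architecture imposes on job placement.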
Given the large amounts of electricity consumed by data centers worldwide, this new system of power delivery could lead to significant reductions in overall energy consumption, as well as to financial savings for Google and other companies and institutions that adopt the technology.
The idea is motivated by Pilawa's previous research in solar power. "You stack solar cells to create a solar panel, and those sorts of systems have inherent voltage step-up inside of them," Pilawa said. For the data centers, those principles will be reversed, using series-connected domains to enable the efficient consumption, rather than production, of energy.
The series-connected concept has also been demonstrated for a handful of low-voltage microprocessor loads at a low power level by Pilawa's colleague Professor Philip T. Krein and his graduate student Pradeep Shenoy. The work on data center power delivery expands on this concept and attempts to scale it up to much higher voltages and a large number of servers. In addition, important considerations such as hot-swapping of malfunctioning servers and operator safety must be taken into account in this new architecture.
The Google Faculty Research Award will provide funding for one graduate student and two undergraduates, as well as some hardware prototype development. Pilawa is particularly excited about the undergraduate participation. In the preliminary proof-of-concept demos, done before the Google submission, two undergraduates were deeply involved in both hardware and software development. "We spent a few long nights, professor, grad students, and undergrads, in the lab, making the system work. So I'm very proud of their work on this," Pilawa said. "To me, it's an important way for undergrads to get involved in research, and to see how some of the concepts they are learning in class are applied to solve very important challenges for society."
Beyond the financial significance of the award, Pilawa is looking forward to a two-way exchange with Google. "You know, in academia, our job is to look further down the road than companies do. They look at maybe next quarter, next year, the next two years. We're taking a longer time horizon in our research, but it is always very important to get the interaction with industry, to hear what their problems are today, and then think about what their long-term challenges will be. Google is at the very forefront of efficient data center design, and it is very exciting that they have shown an interest in this next-generation data center power delivery approach."