- System architectures for datacenters
- Processor architecture and microarchitecture
- Memory systems and interconnection networks
- Systems with quality-of-service guarantees
I am a lecturer (assistant professor) in the School of Informatics at the University of Edinburgh. My work focuses on improving the efficiency of large-scale datacenters (think Google or Facebook) through improvements to server processor architectures, memory systems, and interconnects. To understand why this matters, read on.
Previously, I was a post-doctoral researcher at the Parallel Systems Architecture Lab at EPFL, working on Scale-Out Processors and other fun projects. I did my PhD in Computer Science at The University of Texas at Austin. My thesis focused on scalability and quality-of-service in the on-chip networks of highly integrated processor chips.
Why datacenters (and my research) matter
As mobile computing and cyber-physical systems displace traditional forms of computing, datacenters will shoulder the data-crunching burden. There are three reasons for the growing reliance on datacenters. First, mobile and embedded systems are inherently constrained in their processing capabilities by the limits of battery technology, low thermal ceilings, and form-factor considerations. Second, the important applications of today (e.g., search, social networking, business analytics) draw on enormous volumes of data and have massive processing requirements that are well beyond the reach of individual servers. Third, businesses of all sizes are increasingly moving their applications to the cloud for reasons of scalability, resiliency, and operational efficiency.
A modern datacenter is a football-field-sized installation that houses tens of thousands of servers, draws 5-20 MW of power, and costs over $100 million to deploy. A 2010 study estimated the global datacenter energy footprint at 1.3% of worldwide electricity usage. This number is widely expected to grow considerably in the coming decade due to the rapid pace of deployment of new datacenters and the developing world coming online.
The energy scalability of datacenters is an important problem with major economic and environmental implications. As the semiconductor industry inches toward the physical limits of voltage scaling, improving the energy efficiency of future chips will require much more effort than in the past. The same is true for the conventional memory and networking technologies used in datacenters. The time is right to reinvent server architectures by specializing processor chips, memory hierarchies, and networks for tomorrow's datacenter computing.