Research Papers

How future buildings could redefine distributed computing

Qarnot

May 2018

One of the most important challenges for the Internet of Things (IoT) is the implementation of edge computing platforms. To improve the response time, confidentiality, or energy consumption of IoT applications, part of the IoT services must be operated on servers deployed close to connected devices. That said, much remains to be done for the practical realization of this vision. What are edge servers? Where will they be deployed? These questions are poorly addressed in the literature. In this paper, we propose to build an edge computing framework around new datacenter models in which computing servers are seamlessly integrated into buildings. The datacenter models we consider have been developed around the concept of the data furnace, which has been implemented in European cities for district heating. Our paper introduces a new processing model in which edge and distributed cloud computing are operated on the same data furnace servers. We then discuss the challenges in using this model for edge computing; our discussion covers urban integration, service computing frameworks, and performance issues. Finally, we conclude with an analysis of the impact of our proposal on the future of cloud computing.

Future views on waste heat utilization – Case of data centers in Northern Europe

February 2018

In this study, the potential for data center waste heat utilization was analyzed in the Nordic countries, and an overview of upcoming data center projects where waste heat is utilized is presented. Especially in Finland, data center operators are planning to reuse waste heat in district heating. However, business models between the district heating network operator and the data center operator are often not transparent. The implications of economics and emissions on waste heat utilization in district heating were analyzed through life cycle assessment. Currently, the biggest barriers to utilizing waste heat are its low quality (e.g. low temperature or an unstable source of heat) and high investment costs. A systematic 8-step change process was suggested to ensure success in changing the priority of waste heat utilization in the data center and district heating markets. Relevant energy efficiency metrics were introduced to support rational decision-making in the reuse of waste heat. Economic calculations showed that when waste heat is utilized in district heating, the investment payback time is shorter than the estimated lifetime of the heat pump equipment. However, the environmental impact of waste heat utilization depends on the fuel that the waste heat replaces.
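The payback-time argument in this abstract can be illustrated with a back-of-the-envelope calculation. All figures below (investment cost, heat price, coefficient of performance, electricity price) are hypothetical assumptions for illustration, not values taken from the study:

```python
# Simple payback estimate for a heat pump that upgrades data-center
# waste heat for a district heating network. All numbers are
# illustrative assumptions, not figures from the study.

capex_eur = 500_000              # heat pump + piping investment (assumed)
heat_sold_mwh_per_year = 8_000   # upgraded waste heat sold per year (assumed)
heat_price_eur_per_mwh = 30      # revenue per MWh of delivered heat (assumed)
cop = 4.0                        # heat pump coefficient of performance (assumed)
electricity_eur_per_mwh = 80     # price of electricity driving the pump (assumed)

revenue = heat_sold_mwh_per_year * heat_price_eur_per_mwh
# With COP = 4, one MWh of electricity delivers four MWh of heat,
# so the pump consumes heat_sold / COP of electricity.
electricity_cost = (heat_sold_mwh_per_year / cop) * electricity_eur_per_mwh
annual_net = revenue - electricity_cost

payback_years = capex_eur / annual_net
print(f"Simple payback: {payback_years:.2f} years")  # 6.25 years with these assumptions
```

With these assumed numbers the simple payback is about 6 years, comfortably below a typical 15-20-year heat pump lifetime, which is the qualitative conclusion the abstract reports.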

The Datacentre of the Future

Asperitas

2017

A datacentre is not about ICT, power, cooling or security. It is not even about the scale or availability of these systems. It is about the availability of information. Realising this allows the datacentre environment to be approached from a completely different angle, which opens the door to robust and simplified solutions for the biggest challenges facing the industry today. This document describes this approach.

Direct-to-chip liquid cooling for reducing power consumption in a subarctic supercomputer centre

Aalto University and UiT The Arctic University of Norway

2016

Abstract
Reduction of data centre power consumption is a timely challenge. Waste heat reuse is another focus area in the development of energy-efficient and sustainable data centres. These two issues are interconnected through liquid cooling of server racks and/or direct-to-chip liquid cooling; both solutions make it possible to return a significant proportion of the waste heat energy to profitable use. Nevertheless, heat reuse is not the only benefit direct-to-chip liquid cooling may offer. Another is a notable reduction in the power consumed by the cooling fans of server blades and rack-level cooling systems.
To evaluate this benefit, we performed power consumption and performance measurements in a subarctic supercomputer centre hosting a cluster of 632 blade nodes. Our study concentrated on a 47-node subset that we analysed while the servers were executing the LINPACK benchmark. Our conclusion is that direct-to-chip liquid cooling can reduce the total power consumption, in this case by up to 14.4%, depending on the inlet air temperature.
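The bookkeeping behind a headline figure like "up to 14.4%" is simply the cooling-related savings divided by the total draw. The per-node wattages below are assumptions chosen for illustration; only the node count and the 14.4% figure come from the abstract:

```python
# Illustrative reduction calculation for a 47-node measurement subset.
# Per-node power figures are assumptions, not measurements from the study.

nodes = 47                      # subset size, from the abstract
air_cooled_node_w = 350.0       # assumed average node power with air cooling
saving_per_node_w = 50.4        # assumed fan/cooling saving with liquid cooling

total_before_w = nodes * air_cooled_node_w
total_saving_w = nodes * saving_per_node_w
reduction = total_saving_w / total_before_w
print(f"Total power reduction: {reduction:.1%}")  # 14.4% with these assumptions
```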

How Hot Does A Data Furnace Heating System Need To Be?

Green Processing

January 2015

When designing a data furnace based heating system for domestic hot water and space heating, how hot does the output of the heating system need to be to satisfy the needs of the home and its users?

Direct Liquid Cooling for Electronic Equipment

Berkeley National Laboratory

March 2014

Abstract
This report documents a demonstration of an electronic-equipment cooling system, in the engineering prototype development stage, that can be applied in data centers. The technology provides cooling by bringing a water-based cooling fluid into direct contact with high-heat-generating electronic components.
This direct cooling system improves overall data center energy efficiency in three ways:
● High-heat-generating electronic components are more efficiently cooled directly using water, capturing a large portion of the total heat generated by the electronic equipment. This captured heat reduces the load on the less-efficient air-based data center room cooling systems; the combination contributes to the overall savings.
● The power consumption of the electronic equipment's internal fans is significantly reduced when equipped with this cooling system.
● The temperature of the cooling water supplied to the direct cooling system can be much higher than that commonly provided by facility chilled water loops, and therefore can be produced with lower cooling infrastructure energy consumption and possibly compressor-free cooling.
● Providing opportunities for heat reuse is an additional benefit of this technology. The cooling system can be controlled to produce high return water temperatures while providing adequate component cooling.
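The chiller-energy argument in the third bullet can be sketched with the Carnot (ideal) coefficient of performance: the closer the supply-water temperature is to the heat-rejection temperature, the less compressor work an ideal chiller needs, and above the ambient temperature the compressor can often be bypassed entirely. The temperatures below are assumptions for illustration:

```python
# Ideal-chiller sketch of why warmer supply water cuts cooling energy.
# All temperatures are illustrative assumptions, not values from the report.

def ideal_cooling_cop(t_supply_c: float, t_reject_c: float) -> float:
    """Carnot COP for lifting heat from the supply temperature
    to the heat-rejection temperature (temperatures in Celsius)."""
    ts = t_supply_c + 273.15
    tr = t_reject_c + 273.15
    return ts / (tr - ts)

# Conventional chilled water (7 °C) vs. warm-water direct cooling (30 °C),
# both rejecting heat to ambient at an assumed 35 °C:
print(f"Ideal COP at  7 °C supply: {ideal_cooling_cop(7, 35):.1f}")
print(f"Ideal COP at 30 °C supply: {ideal_cooling_cop(30, 35):.1f}")
```

Real chillers reach only a fraction of the Carnot COP, but the trend holds: raising the supply temperature from 7 °C to 30 °C multiplies the ideal COP several times over, and once the supply temperature exceeds ambient, heat can be rejected without any compressor at all.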


Aquasar: A hot water cooled data center with direct energy reuse

IBM

July 2012

Abstract
We report the energy and exergy efficiencies of Aquasar, the first hot-water-cooled supercomputer prototype. The prototype also has an air-cooled part to allow the coolants' performances to be compared. For example, a chip/coolant temperature differential of only 15 °C was sufficient for chip cooling using water. The air-cooled side, however, required air pre-cooling down to 23 °C and a chip/coolant temperature differential of 35 °C. Whereas extra exergy was expended for air pre-cooling, the higher thermal conductivity and specific heat capacity of water enabled coolant temperatures to be safely raised to 60 °C. Using such hot water not only eliminated the need for chillers, it also opened up the possibility of heat reuse. The latter was realized by using the hot water from Aquasar for building heating. A heat recovery efficiency of 80% and an exergetic efficiency of 34% were achieved with a water temperature of 60 °C. All these results establish hot water as a better coolant than air. A novel concept of the economic value of heat was introduced to evaluate different reuse strategies, such as space heating and refrigeration using adsorption chillers. It was shown that space heating offers the highest economic value for the heat recovered from data centers.
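The abstract's distinction between energy (heat recovery) efficiency and exergy rests on the fact that heat at a higher temperature carries more maximum extractable work. The exergy of heat Q at absolute temperature T relative to an ambient T0 is Q·(1 − T0/T), the Carnot factor. A minimal sketch, assuming an ambient of 20 °C (the ambient temperature is an assumption, not a figure from the paper):

```python
# Why 60 °C coolant is so much more valuable for reuse than cooler water:
# the Carnot factor (1 - T0/T) gives the work-equivalent fraction of heat
# at temperature T relative to ambient T0. Ambient of 20 °C is assumed.

T0_K = 293.15  # assumed ambient, 20 °C

def carnot_factor(t_celsius: float, t0_kelvin: float = T0_K) -> float:
    """Fraction of heat at t_celsius that is exergy, relative to ambient."""
    t_kelvin = t_celsius + 273.15
    return 1.0 - t0_kelvin / t_kelvin

print(f"Carnot factor at 60 °C: {carnot_factor(60):.3f}")
print(f"Carnot factor at 35 °C: {carnot_factor(35):.3f}")
```

With these assumptions, heat delivered at 60 °C carries roughly two and a half times the exergy of the same heat at 35 °C, which is why raising the coolant temperature matters so much for reuse, independent of the 80% energy-recovery figure.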

The Data Furnace: Heating Up with Cloud Computing

Microsoft

June 2011

Abstract
In this paper, we argue that servers can be sent to homes and office buildings and used as a primary heat source. We call this approach the Data Furnace, or DF. Data Furnaces have three advantages over traditional data centers: 1) a smaller carbon footprint, 2) reduced total cost of ownership per server, and 3) closer proximity to the users. From the homeowner's perspective, a DF is equivalent to a typical heating system: a metal cabinet is shipped to the home and added to the ductwork or hot water pipes. From a technical perspective, DFs create new opportunities for both lower cost and improved quality of service, if cloud computing applications can exploit the differences in cost structure and resource profile between Data Furnaces and conventional data centers.