Okay, this is not strictly about open source and making, but it is an interesting thought about what we do for sustainability and the environment. Could you imagine that a cat picture is wasting our Earth's resources? Facebook could, and it is using a nice approach. And you, are you thinking about saving power every day?
When it comes to measuring data center efficiency, the unit that is generally thrown around is called Power Usage Effectiveness (PUE). This is the ratio of the total amount of power delivered to a data center to the amount actually consumed by the servers, switches, routers, and disks. The lower the PUE number, the better; a perfect (and unachievable) score would be 1.0, with every watt going to computing.
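To make the ratio concrete, here is a back-of-envelope sketch in Python. The facility numbers are made up for illustration, not real figures from any data center:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by the
    power consumed by IT gear (servers, switches, routers, disks)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,700 kW drawn from the grid, of which
# 1,000 kW actually reaches the racks.
print(round(pue(1700, 1000), 2))  # → 1.7
```

Everything above 1.0 is overhead: cooling, lighting, and losses in power conversion and distribution.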
You wouldn’t think a few cat pictures could consume so much power. But by 2013, Facebook had over an exabyte of images shared by users via their timelines in the service’s “Haystack” photo store. Many of these images are rarely, if ever, viewed weeks after they’ve been initially shared. But Facebook’s data centers have to keep them available—and backed up as well in the event of a disk failure.
That means keeping staggering amounts of storage online. But Facebook engineers developed an approach called “Cold Storage” that allowed the company to keep more than half those disks powered off at any given time, dramatically cutting power consumption. Now the Facebook storage team is looking at cutting that down even further by moving older images to Blu-ray optical disks.
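The core idea behind a cold tier like this can be sketched with a simple access-recency rule. The 90-day threshold and the `tier_for` helper below are hypothetical illustrations, not Facebook's actual heuristics:

```python
from datetime import datetime, timedelta

# Hypothetical policy: photos not viewed in 90 days become candidates
# for cold storage, where their disks stay powered off until a read
# request actually arrives.
COLD_THRESHOLD = timedelta(days=90)

def tier_for(last_viewed: datetime, now: datetime) -> str:
    """Return which storage tier a photo belongs in."""
    return "cold" if now - last_viewed > COLD_THRESHOLD else "hot"

now = datetime(2014, 6, 1)
print(tier_for(datetime(2014, 5, 20), now))  # viewed 12 days ago → hot
print(tier_for(datetime(2013, 1, 1), now))   # stale for over a year → cold
```

The power savings come from the fact that a spun-down disk draws almost nothing; the trade-off is higher latency on the rare occasion an old photo is requested.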
Ten years ago, data centers that had a PUE of 2.5 were common, meaning that more than half the power they brought in from the wire was either used for things other than computing (like cooling and lighting) or lost during conversions from AC to DC and changes in voltage. Today, data centers in general have improved somewhat. A 2014 survey by the Uptime Institute found that the average self-reported PUE for large data centers had dropped to 1.7. But Facebook has gone to great lengths to increase efficiency: the average PUE for Facebook’s Forest City data center over the last year was 1.08, meaning that for every 100 kilowatts powering the racks, a mere 8 more were used for anything else. Facebook’s infrastructure team once referred to this as the “negawatt”—the power they never had to consume.
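Those PUE figures translate directly into overhead power. A quick sketch of the arithmetic (the 1,000-watt rack load is an arbitrary example):

```python
def overhead_watts(pue, it_watts):
    """Non-IT power (cooling, lighting, conversion losses) implied by a
    given PUE for a given IT load."""
    return (pue - 1.0) * it_watts

# For every 1,000 W powering racks:
print(round(overhead_watts(1.08, 1000)))  # → 80 W of overhead (Forest City)
print(round(overhead_watts(2.5, 1000)))   # → 1500 W: more overhead than compute
```

At a PUE of 2.5, the facility burns one and a half watts of overhead for every watt of computing; at 1.08, the overhead nearly disappears.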
One of the biggest energy costs at many data centers is cooling. Giant HVAC cooling systems, usually using chilled water, blast cold air in ducts under the raised floors of data centers and direct it up through the racks of hardware. These systems then exhaust the heated air that comes out of the “hot lane” behind the racks to the outside. All those pumps and fans and heat exchangers consume a lot of electricity.
Rather than using chillers or other traditional air conditioning systems to lower the temperature of air piped into the data center, the Prineville site relies on free outside air combined with evaporative cooling—harnessing the evaporation of water into the air to remove heat. Rather than keeping hardware frosty cool, Facebook’s cooling systems keep the temperature within the gear’s tolerances and rely largely on thermodynamics and convection to move air through the data center. All this cooling is done in “penthouse” structures at the top of the buildings, letting air drop down from above rather than forcing it up through traditional data center raised floors.
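The physics of evaporative cooling is what makes this cheap: the latent heat of vaporization of water is roughly 2,260 kJ per kilogram, so each kilogram that evaporates soaks up a great deal of heat. A hedged, idealized estimate (ignoring humidity limits, drift, and blowdown losses):

```python
LATENT_HEAT_KJ_PER_KG = 2260  # approximate latent heat of vaporization of water

def water_kg_to_remove(heat_kwh):
    """Kilograms of water that must evaporate to absorb `heat_kwh` of heat,
    assuming all latent heat is drawn from the air being cooled."""
    heat_kj = heat_kwh * 3600  # 1 kWh = 3,600 kJ
    return heat_kj / LATENT_HEAT_KJ_PER_KG

print(round(water_kg_to_remove(1), 2))  # → 1.59 kg of water per kWh of heat
```

In other words, misting well under two liters of water can, in the ideal case, carry away a kilowatt-hour of server heat—no compressors required.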