“Power Hungry!”

I started responding to a couple of the other threads on energy here, but I think this topic deserves its own.

The latest issue of Bloomberg BusinessWeek has a story titled “Power Hungry” (no link, sorry). It details some of the trials of various power companies, particularly along “data alley,” where power consumption is jumping at rates even they can’t believe - to the point where they have had to deny some new AI and data center projects permission to connect for months, even years, until the grid in that area can be upgraded.

National energy use continues to climb, but only at about 1% per year. In some of these corridors it’s jumping 5% or more per year with no end in sight. And, by the way, here comes AI for you and all your friends, which is as power-intensive as anything going today, including crypto mining.

“It’s not just the explosion in data centers that has power companies scrambling to revise their projections. The Biden administration’s drive to seed the country with new factories that make electric cars, batteries, and semiconductors is straining the nation’s already stressed electricity grid.”

And

“In Virginia, which bills itself as the world’s best hub for data centers, about 80 facilities have opened in Loudoun County since 2019 as the pandemic accelerated the shift online for shopping, office work, doctor visits and more. Electricity demand was so great that Dominion Energy was forced to halt connections to new data centers for about three months in 2022. That same year the head of data center company Digital Realty said on an earnings call that Dominion had warned its big customers about a “pinch point” that could prevent it from supplying new projects until 2026.”

I have said before, our thirst for energy is infinite. As soon as we find another way to make it, we use it up and want more. Which brings us to:

“Soaring electricity demand is slowing the closure of coal plants elsewhere. Almost two dozen facilities from Kentucky to North Dakota that were set to retire between 2022 and 2028 have been delayed, according to America’s Power, a coal-power trade group.”

We are so screwed.

8 Likes

Yeah. How long before an ambitious tech entrepreneur develops a way to convert the 100 watts/hour that humans produce to usable electrical energy?

1 Like

Do you want the blue pill or the red pill?

2 Likes

We have that, don’t we? It also competes with those weight reducing pills.


The generator is 500 watts.

OCD: there’s no such thing as “watts/hour.” You mean watts, or watt-hours (if measuring energy over time).

Mike

4 Likes

It’s difficult for the average person to contemplate how much power modern data centers are drawing. I’ve seen similar stories over the past couple of months that started with references to electricity demand spikes caused by cryptomining, then moved on to the data centers being built explicitly for AI.

There are communities with data centers in their midst that have seen electric rates skyrocket because the neighboring data center (whether its operators and tenants knew it or not) was spending 30-50 percent of its “compute” on cryptomining software. That might have resulted from legit cryptominers buying the compute power above board and using it, or it might stem from tenants having their systems infected with malware that mined with stolen compute. Either way, demand goes up, the utility pays more on the spot market to meet the demand, and ALL electric customers pay the higher rate.

How much power are we talking?

Here’s the mental exercise I started thinking through… Everything is based on Ohm’s law, V = IR, and the related power equation, P = VI.

Traditionally, people think of a light bulb as a small power load. It actually IS a lot of power: an incandescent 100-watt bulb plugged into a 120 VAC outlet draws 100/120, or about 0.83 amps of current.
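For anyone who wants to plug in their own numbers, here’s a trivial sketch of that arithmetic in Python (my own throwaway helper, nothing from the article):

# I = P / V for a simple resistive load on household AC
def current_amps(power_watts, volts=120.0):
    return power_watts / volts

print(current_amps(100))   # 100 W incandescent bulb on 120 VAC -> ~0.83 A
print(current_amps(1500))  # for scale: a 1500 W space heater -> 12.5 A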

A typical Intel i7 CPU in your laptop or desktop computer might draw about 10-20 watts when idling or when the user is doing simple stuff like typing in a word processor. If the processor is actually busy recalculating a spreadsheet, streaming a YouTube video in a browser or playing a video game, the power draw will be closer to the CPU’s limit which is around 65 watts.

If you have a souped up dedicated gaming machine with a $700 video card providing a separate Graphics Processing Unit (GPU), that GPU card alone might draw 200 watts during heavy rendering.

So a single “home use” PC might draw up to 265 watts during heavy use.

In a data center environment, a typical server might have two CPUs, 64 or 128 GB of RAM and a bunch of hard disks or solid state drives installed pulling power. But software loads are mapped to servers to keep them as physically busy as possible. While a home PC might run at 100% of its capability for 10% of the day, a server in a data center might operate at 85% of peak power continuously, 24x7.

As an example, a 2 rack-unit Dell PowerEdge R750XA has two Intel Xeon Silver 4310 CPUs that can draw up to 120 watts each. Assuming memory draws about 3 watts per 8 GB (48 watts total for 128 GB of RAM) and drives draw about 3 watts each (12 watts total for 4 drives), the server’s peak power draw could be about 300 watts. If that server operates at 85% of peak 24x7, that’s a draw of 300 x 0.85 = 255 watts per server.

One standard rack in a data center provides 48 “rack units” of vertical space, so one rack can fit 24 of these 2RU servers, and a single rack might draw 255 x 24 = 6.12 kilowatts. With 120 VAC power, that requires delivering 51 amps to each rack.

If those servers are being installed to process AI data, each server will also include GPU cards like a home PC. They won’t be used for driving displays but the memory and processor threads used for graphics rendering are equally valuable in crunching the matrix mathematics associated with creating and using large language models. At a minimum, that adds another 200 watts of power draw per server, bringing that draw up to 500 watts, again being utilized at least 85% of the day each day, 24x7. That’s 500 x 0.85 x 24 = 10.2 kilowatts per rack.
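If you want to check those per-rack figures, here’s a small sketch of the same arithmetic, assuming the same 48U rack, 2RU servers, 85% utilization and 120 VAC feed used above:

# Rough per-rack numbers for 2RU servers in a 48U rack (same assumptions as above)
RACK_UNITS = 48
SERVER_RU = 2
UTILIZATION = 0.85        # assumed average fraction of peak, running 24x7
SUPPLY_VOLTS = 120.0      # AC feed assumed above

def rack_power_kw(server_peak_watts):
    servers_per_rack = RACK_UNITS // SERVER_RU              # 24 servers per rack
    return servers_per_rack * server_peak_watts * UTILIZATION / 1000.0

print(rack_power_kw(300))                        # base server: ~6.12 kW per rack
print(rack_power_kw(500))                        # with one 200 W GPU: ~10.2 kW per rack
print(rack_power_kw(300) * 1000 / SUPPLY_VOLTS)  # ~51 A delivered per rack at 120 VAC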

So how much processing power is being added to meet demands for Artificial Intelligence uses? Microsoft alone is planning to build a campus of data centers to house at least one million “AI chips”, where each “chip” likely refers to a single GPU card.

If Microsoft’s server design fits two GPUs per 2RU server, power consumption goes up by another 200 watts per server to 700, resulting in 700 x 0.85 x 24 = 14.28 kilowatts per rack. If Microsoft is building space for 1,000,000 “AI chips”, that’s

servers = 1,000,000 GPUs / 2 GPUs per server = 500,000 servers

which is

racks = 500,000 / 24 servers per rack = 20,833 racks

which is

power = 20,833 x 14.28 kilowatts/rack = 297,500 kilowatts

Over the course of a 30-day month, that’s

monthly AI power = 297,500 x 24 hours x 30 = 214,200,000 kilowatt hours
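Stringing that whole estimate together as a sketch (same assumptions: 700 W servers with two GPUs each, 24 servers per rack, 85% utilization, 30-day month):

# Scale the 2-GPU server scenario up to 1,000,000 "AI chips"
GPUS = 1_000_000
GPUS_PER_SERVER = 2
SERVERS_PER_RACK = 24
SERVER_PEAK_W = 700              # 300 W base + 2 x 200 W GPUs
UTILIZATION = 0.85
HOURS_PER_MONTH = 24 * 30

servers = GPUS // GPUS_PER_SERVER                                 # 500,000 servers
racks = servers / SERVERS_PER_RACK                                # ~20,833 racks
rack_kw = SERVERS_PER_RACK * SERVER_PEAK_W * UTILIZATION / 1000   # ~14.28 kW per rack
total_kw = racks * rack_kw                                        # ~297,500 kW
monthly_kwh = total_kw * HOURS_PER_MONTH                          # ~214,200,000 kWh
print(servers, round(racks), round(total_kw), round(monthly_kwh))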

For comparison, you can look at your own electric bill and calculate how many equivalent “yous” there are in that total. My peak electric bill occurs in August of each year and amounts to 540 kilowatt hours in the month costing $93. The monthly power draw of this planned Microsoft data center is equivalent to

Equiv households = 214,200,000 / 540
                 = 396,667 individual households
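Or in the same sketch form, using my own 540 kWh August bill as the yardstick (your bill will obviously differ):

MONTHLY_AI_KWH = 214_200_000   # from the estimate above
MY_MONTHLY_KWH = 540           # my peak (August) household usage
print(round(MONTHLY_AI_KWH / MY_MONTHLY_KWH))   # ~396,667 households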

Assuming I haven’t mis-converted units between watts, kilowatts and kilowatt-hours somewhere, that is a staggering multiple. And that power draw is CONTINUOUS throughout the year, not just a seasonal spike for a couple of months.

As an additional caveat, the type of GPU likely to be purchased for core AI functions is not a typical $700 gamer GPU or even the more expensive $1,800 deluxe GPU used by fanatical gamers. Nvidia makes a different line of GPUs, an example being their A100, which costs nearly $10,000 per unit and draws 400 watts peak per card. These require installation in sets of 4 or 8 on a special interconnect board provided by Nvidia, which fits into a server chassis that is typically 4RU tall. So if the A100 is used in this new data center (a quick sketch of the arithmetic follows the list):

  • one server = 8 GPUs
  • power per server = 300 + 8 x 400 = 3500 watts
  • servers per rack = 48 / 4 = 12
  • power per rack = 12 x 3500 = 42,000 watts
  • GPUs per rack = 96
  • racks for 1,000,000 GPUs = 1,000,000 / 96 ≈ 10,417 racks
  • total power ≈ 10,417 racks x 42,000 watts/rack ≈ 437,500 kilowatts
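Here is that A100 arithmetic as a sketch; I round the rack count up so every GPU has a slot, and the 300 W base-server figure is carried over from the earlier assumption:

import math

# A100-style build-out: 8 GPUs per 4RU server, 48U racks, 300 W base server
GPUS = 1_000_000
GPUS_PER_SERVER = 8
GPU_PEAK_W = 400                 # A100 peak per card
SERVER_BASE_W = 300              # CPUs/RAM/drives, same assumption as earlier
SERVER_RU = 4
RACK_UNITS = 48

server_w = SERVER_BASE_W + GPUS_PER_SERVER * GPU_PEAK_W   # 3,500 W per server
servers_per_rack = RACK_UNITS // SERVER_RU                # 12 servers per rack
rack_w = servers_per_rack * server_w                      # 42,000 W per rack
gpus_per_rack = servers_per_rack * GPUS_PER_SERVER        # 96 GPUs per rack
racks = math.ceil(GPUS / gpus_per_rack)                   # 10,417 racks
total_kw = racks * rack_w / 1000                          # ~437,500 kW
print(server_w, rack_w, racks, round(total_kw))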

As these numbers illustrate, it’s pretty easy to argue that any advances we are making in energy efficiency to combat global warming are being swamped by the additional electricity demand for AI.

WTH

9 Likes

Thanks for all those numbers…

Just to add to this: big server GPUs are now generally not the PCIe cards you put in a desktop PC, which have a power limit (with extra power cables) of 300-350 watts. The newer form factors (SXM or OAM) have limits of at least 700 watts, and if the limit is higher you can bet that much is being used.
For example, a DGX H100 system (one dual-socket CPU server with 8 GPUs) has a max power of 10.2 kW.

Mike

3 Likes

Some forward-looking companies like Apple and Google have built giant solar farms.

Notice the date, 2016!

In the late 1990s there was a similar outcry over the need for more coal. Sorry, I can’t find any useful links.

The Captain

We need the wealth to drive the green initiatives.

We are $34 trillion in debt. We have a trade imbalance. As those metrics improve in relative terms, we can afford to do the right things.

One final note: these numbers are a bit simplistic in that they only address the power to run the servers themselves. As calculated above, the typical power draw for one rack of servers ranges from 6,000 to 10,000 to 14,000 watts. That actually creates two problems to solve. First, you have to DELIVER that many watts to the rack, either as 120 volts AC if using typical servers with AC power supplies or as a lower-voltage DC feed. (Google builds its own servers that run on DC power to eliminate electronics from each chassis and reduce losses from AC/DC conversion.)

But after the hardware consumes that electricity, it creates enormous amounts of heat that will destroy the servers if it isn’t removed. You know what it’s like standing in front of a 1,000-watt hair dryer when it’s running. Imagine the heat given off by 6 or 10 or 14 hair dryers running continuously, then imagine the additional AC power required to run chillers and air handlers to pull that much heat away from 10,000+ racks. Even locating the data center in a cold climate where outside air can help lower temps doesn’t help that much. The power draw estimated above might understate the real total by 50-100 percent.
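To put a very rough number on that: the industry rolls this overhead into a metric called PUE (total facility power divided by IT power). The overhead factor in this little sketch is purely my assumption, just mirroring the 50-100 percent range above:

RACK_IT_KW = 14.28    # IT load for the 2-GPU rack scenario estimated earlier
OVERHEAD = 1.5        # assumed: +50% for cooling, air handling, conversion losses (could approach 2x)
print(RACK_IT_KW * OVERHEAD)   # ~21.4 kW of total facility draw per rack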

WTH

3 Likes

Here’s a recent thread I saw about cooling server farms -

2 Likes

Use the heat to heat/cool houses, offices, etc. nearby. It’s not geothermal, but it’s the same concept.

1 Like

Combinations of fans and water cooling systems were used way back on what were then, in the ’90s, small server farms for Ma Bell data servers… One local site had a system that made ice at night and used that ice during the day to keep things cooled down… Another site I was in, over in the Sacramento Valley, was huge; the air handling ducts were massive, supported on 6" channel iron, and the rumble beyond where I was working was just as impressive… Hard to imagine today’s server farms, scaled up as they are from there… Thanks to WTH for the detailed math behind what it must be today.

As with the electronic switching for Ma Bell: as we moved away from electromechanical gear and its heavy DC loads, the ESS equipment loads dropped a lot, simply being more efficient, so a 6-story Central Office that once ran a 35-40,000 amp (-48V, +24V DC) busy-hour load dropped to a 10-15,000 amp load. Better, and easier on the backup generators, but still heavy, and still a lot of fans and air handling going on… A lot of copper went into handling all those heavy loads, quite an investment, and a lot of labor as well, from installation through maintenance… A lot of changes in the last couple of decades…

4 Likes

Not necessarily. One solution is the Power Purchase Agreement (PPA), which is becoming increasingly common with big power users like data centers. In a PPA the data center makes a 10+ year agreement with a developer to build renewable energy generators, from which the user buys energy at an agreed-upon price.

PPAs have become a significant driver of renewable energy expansion, particularly solar. PPAs are also a big reason for the current economic growth of the sunbelt states, as energy-hungry businesses look for places with lots of sun and cheap land for windmills and solar generators.

This seems like a classic win-win situation. The data center gets energy usually at below utility costs while the renewable energy developer gets a guaranteed revenue stream.

2023 was a record year for PPAs, particularly in Europe. Not surprising, given the higher energy prices across the Atlantic.

3 Likes