HPE Lays A Big Long-Term Bet On Private Cloud AI At HPE Discover

HPE’s Antonio Neri and Nvidia’s Jensen Huang, dwarfed by the wraparound screens inside The Sphere, present their companies’ joint offering for AI private cloud. (Image: Moor Insights & Strategy)

Photos cannot do justice to the immersive experience inside The Sphere in Las Vegas, and a couple of weeks ago I got to attend the first-ever keynote held in the venue. The event was HPE Discover, the speaker was HPE CEO Antonio Neri (with an assist from Nvidia’s Jensen Huang) and the topic of the day was, of course, AI. In this analysis, I will do my best to capture the spirit of the Day 1 keynote and share my assessment of the new offering. Also check out deep dives from Matt Kimball on HPE’s new private AI solution (plus its strength in direct liquid cooling) and Will Townsend on HPE’s networking products.

HPE was in a bit of a difficult position as one of the last major enterprise infrastructure companies to deliver its strategic answer for helping customers harness AI in 2024, after Lenovo, Dell Technologies, Cisco, AWS, Microsoft Azure and Google Cloud. But that answer—Nvidia AI Computing by HPE, an AI private cloud for the enterprise—is, I’ve concluded after much thought, a unique one. HPE has a long and fruitful history of working with Nvidia, and this tightly integrated joint offering, complete with software and hardware from both companies, is much more than a copycat of other Nvidia partner offerings such as Dell Technologies’ “AI factories.” It is smartly targeted at the enterprise and designed to be utterly simple to implement for faster time-to-value. The company is trying to build an “easy button” for enterprise AI, and I believe that represents a differentiated long-term bet by HPE.

Supercomputing, Networking And Unlimited Intelligence

As with many of these keynotes, there was a slick opening video, although the dimensions of The Sphere made this one feel like a 3-minute IMAX movie. The inspirational piece touted AI’s ability to improve different aspects of life—energy production, medicine, agriculture and more. When he came onstage, Neri picked up on its “Intelligence has no limits” theme and tied it to HPE’s 80-year legacy of innovation reaching back to Dave Packard and Bill Hewlett. I liked the historical framing; that heritage is a genuine strength for the company. After explaining the company’s AI ethics principles and some of its efforts in data security, he turned to HPE’s work in supercomputing and networking.

The audience at HPE Discover surely includes more than its fair share of people who are enthusiasts for science and engineering well beyond the immediate business applications. Neri spoke to all of these people (including me) by going through some of HPE’s achievements in supercomputing, including projects in clean energy, dementia research, testing aircraft engines and more, not to mention the inclusion of its technology on the International Space Station and, someday, the Artemis moon missions.

HPE’s Frontier supercomputer is the fastest in the world, and overall the company boasts four of the world’s 10 fastest supercomputers and seven of the top 10 green supercomputers. This is one area (AI is another) where its expertise in systems cooled by water, or direct liquid cooling, is vital. Neri is understandably proud of HPE’s strength in this area, which extends to a portfolio of 300-plus patents. My Moor Insights & Strategy colleague Matt Kimball, who is our in-house expert on liquid cooling, also attended Discover and includes more details about it in his writeup of the event. I want our team to dig in later and assess how much genuine liquid-cooling differentiation HPE has, since other infrastructure providers are making similar “best” claims.

Neri also spent some time talking about HPE’s prowess in networking, especially through the HPE Aruba business. This again connects to AI performance because high-quality networking is needed to bring the right data to wherever AI compute is happening, anywhere from the edge to the cloud. My colleague Will Townsend analyzes this in his research note on the networking advances shared at HPE Discover. The Aruba acquisition and its integration have been an absolute home run, and I look forward to seeing what team Neri can accomplish with the Juniper acquisition once it closes.

Nvidia Partnership And HPE Private Cloud AI

Neri segued to the main event by talking about how his company has partnered with Nvidia for more than ten years to build supercomputers and develop AI technologies. He then announced “Nvidia AI Computing by HPE” and welcomed Nvidia CEO Jensen Huang to the stage. (I find it interesting that Nvidia is listed first in the name of the product.)

These days, no enterprise AI event is complete without Huang. At HPE Discover, he emphasized some of his favorite themes with his typical clarity and energy. He sees the current shift to AI as “the greatest fundamental computing transformation in 60 years,” with attendant changes to “every single layer of the computing stack,” not to mention whole industries—and society itself.

To run AI, Huang said, you need a model, compute and data; to do it at scale, in practical terms you need a stack for each of these three things. But all of these stacks are complex—which is why HPE and Nvidia will do it for you with a single product that gives you a “world-class AI cloud,” according to Huang. If you opt for the on-prem version, HPE, Nvidia and their partners will get you all the help you need with training, model design, deployment and so on.

By this point in the presentation, there was already a big focus on making things easy, but then it got even easier when the two CEOs introduced the turnkey HPE Private Cloud AI. Neri said that this is “the deepest integration to date of Nvidia AI computing, networking and software with HPE storage, . . . servers and the HPE GreenLake cloud.”

Unlike competing services-led approaches, Neri added, “We deliver this as one integrated product.” The hardware and software are already packaged and ready to go, and a single dashboard can monitor all of the related AI services and apps. More than that, as Neri, Huang and the HPE demonstration techs on the show floor did not tire of explaining, “All it takes is three clicks.” Having seen the demo and received a personal dashboard walk-through myself, I can attest that this is accurate.

When a few other analysts and I talked with Neri in a small group after the keynote, he explained that the lesson of enforced simplicity was something he learned from the company’s GreenLake Flex program. Regarding Private Cloud AI, he told us, “I said I don’t want any—zero—configuration.” HPE offers Private Cloud AI in four sizes: small, medium, large and extra-large, with fuller features such as fine-tuning of models as you go up the ladder. Beyond that, however, “Everything comes with it,” Neri said, “but they [the customers] cannot desegregate anything.” Neri explained to us that this integrated solution is the company’s primary vehicle for large enterprise AI. Sure, any customer can piece-part a solution on its own, but this is where most of HPE’s technological, service, sales and marketing resources will go.

Based on what I saw, this makes HPE Private Cloud AI even simpler to set up than some of the hyperscalers’ public-cloud AI offerings, with the possible exception of Amazon Bedrock, AWS’s managed service. Plus, future flexibility is built in. All of the sizes are architecturally compatible and will remain so with later HPE–Nvidia products because they all run on Nvidia’s CUDA platform; this makes it easy to scale up without restructuring anything. “Customers can invest at any point at any scale,” as Huang put it. HPE expects the offering to be generally available in the fall. General availability and public customer stories will make the offering more tangible for everyone and will be needed to build confidence in the solution.

Onstage at Discover, Neri also talked about delivering highly secure models for sensitive data, with built-in compliance and explainability—a “pipeline” of models with “complete lineage, traceability and auditability for the entire AI lifecycle.” I’d like to know more about where the model training is done, and what the connective tissue is between HPE Private Cloud AI and the public cloud, if that’s where the models are being created.

The software contributed by Nvidia (green boxes) and HPE (teal boxes) is a major differentiator for the HPE Private Cloud AI offering. (Image: HPE)

Why Software Is Crucial To HPE Private Cloud AI

When I first looked at HPE Private Cloud AI, I thought that it seemed very similar to the Dell–Nvidia AI factory approach. But then HPE staffers answered a bunch of my questions and the differentiation became clear. The key difference: There is a full stack of software here.

On HPE’s side, this includes AI Essentials, a data lakehouse and so on, but also a full private cloud control plane. On Nvidia’s side, it includes the Nvidia AI Enterprise software suite along with its NIM inference microservices. Considering Nvidia’s astronomical revenue growth and market valuation, it’s easy to focus on the $30,000-plus it commands for its top-end GPUs. Yet AMD and others have competitive chips for training and inference. Where Nvidia is truly locking out the rest of the market is in software, not hardware.
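To make the microservices point concrete, here is a minimal sketch of what consuming one of these NIM services could look like from an application. NIM containers expose an OpenAI-compatible REST API; the endpoint URL, port and model id below are hypothetical placeholders I chose for illustration, not details HPE or Nvidia disclosed at Discover.

```python
# Minimal sketch: querying an Nvidia NIM inference microservice.
# NIM containers expose an OpenAI-compatible REST API; the host, port and
# model id below are hypothetical placeholders for a locally deployed service.
import requests

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical local NIM
MODEL_ID = "meta/llama3-8b-instruct"                        # example model id

def ask(prompt: str) -> str:
    """Send a single chat prompt to the microservice and return the reply text."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }
    response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the key risks in our Q3 supplier contracts."))
```

The point of the sketch is not the particular model but the packaging: because the microservice speaks a standard API, the same application code can sit on top of whichever model or deployment size a customer chooses.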

It’s very clear from talking to Neri that this is an Nvidia-GPU-only solution for now. What would it take for AMD and Intel to become part of it? They would need a fuller software stack of their own that could plug into the combined stack with HPE; that means bringing their software up to the level Nvidia has established, which could take a while.

If someone wanted to replicate the experience that HPE and Nvidia have created here, hypothetically it could be done by piecing together technology from AMD or Intel, Red Hat, VMware, Cloudera and others. But the resulting package would not be “AI in 3 clicks” without a ton of help from a global system integrator.

How HPE Private Cloud AI Stacks Up In The Marketplace

Given what other infrastructure vendors had already introduced, HPE needed to make a bold showing at Discover 2024. With this turnkey solution targeted squarely at the enterprise, I believe it has accomplished that in fine style. It’s smart to go after enterprise use cases with private cloud technologies that those customers can readily understand. And HPE and Nvidia have gone out of their way to make it simple, simple, simple to use their joint offering on-prem or to run it as a managed service on HPE GreenLake.

Bear in mind that today the HPE GreenLake cloud supports more than 34,000 organizations. As Neri pointed out when the other analysts and I spoke to him after the keynote, that’s 34,000 potential AI customers already running on the underlying infrastructure who can easily opt into whichever size of HPE Private Cloud AI suits them. Speaking generally about enterprises, Neri said, “What they want is to deploy AI as quick as they can, if they’re gonna do so.” From what I’ve seen, they’ll be able to do that with this offering in next to no time.

HPE is hard at work on the business-value part of the equation, too. Neri said that HPE has “a very, very long list of use cases that we’re working through,” including many target use cases in supply chain, marketing and finance. Early results “have been very encouraging,” he said, with 20% to 30% improvements in productivity in some areas such as customer support. The thing is, for some functions the improvement hardly needs to be 20% to have a significant business impact. For example, if you can lower your supply chain costs by 2%, nearly all of that saving falls straight through to the bottom line, and because profit is usually a much smaller number than costs, the relative lift in profit can be far bigger than 2%. When I discussed this with Neri, he agreed, using a hypothetical but plausible example of his own: “If you look at the cost of sales, just 1% is a big number. A huge number.”
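To make that arithmetic concrete, here is a quick back-of-the-envelope calculation with purely hypothetical figures of my own; none of these numbers come from HPE or Neri.

```python
# Hypothetical illustration of why a small cost reduction moves profit a lot.
# All figures are invented for the sake of the arithmetic.

revenue = 100_000_000            # annual revenue
supply_chain_costs = 40_000_000  # supply chain / cost-of-goods spend
other_costs = 50_000_000         # everything else (sales, G&A, R&D, ...)

profit_before = revenue - supply_chain_costs - other_costs  # $10,000,000

savings = 0.02 * supply_chain_costs      # a 2% cut in supply chain costs
profit_after = profit_before + savings   # the saving flows straight to profit

print(f"Savings:        ${savings:,.0f}")        # $800,000
print(f"Profit before:  ${profit_before:,.0f}")  # $10,000,000
print(f"Profit after:   ${profit_after:,.0f}")   # $10,800,000
print(f"Relative lift:  {savings / profit_before:.1%}")  # 8.0%
```

In this made-up case, trimming supply chain costs by just 2% lifts profit by 8% in relative terms, which is exactly why Neri can say that even 1% of the cost of sales is “a huge number.”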

Neri knows that we’re in the early days of AI, and that there are considerable risks to go along with the potentially big rewards for the vendors that get it right. But if HPE and Nvidia can help their customers save money or make money in the ways just described, they should come out looking good. To support that, it’s vital that HPE get a steady stream of enterprise testers and customers to go public with their experiences, so the market can judge how well they match the company’s claims.

I think HPE has laid a smart long-term bet on how to go about enterprise AI. The company has a solution that could make a difference for the enterprises that put it to work. Now, HPE (along with Nvidia and its go-to-market partners) has its work cut out for it to make sure that customers understand the differentiation and see HPE as a serious software provider enterprises can trust.
