Despite all the hype and excitement showered on the federal government’s adoption of generative artificial intelligence in recent years, agencies are still “very early on” in the journey to use large language models, according to Amazon Web Services’ public sector leader.
In an interview with FedScoop on the sidelines of the AWS Summit in Washington, D.C., Dave Levy, vice president of worldwide public sector for AWS, compared these early days of generative AI adoption to the advent of the internet — “when Netscape was the leader and before there was a Google,” whose search engine really opened the world wide web to vast public audiences.
But Levy said he sees a period of breakthrough and “acceleration” ahead for those in the public sector, particularly federal agencies and some of the nation’s most important missions like research and health care.
“You’re gonna see some acceleration, and I think you’re gonna see some breakthroughs,” Levy said of federal generative AI adoption.
Federal agencies have been careful and measured in how they’re thinking about generative AI because, as stewards of the American public, they are accountable for ensuring they use the technology responsibly, promoting safety, security and trustworthiness, among other things — especially when applying the advanced tech to the federal government’s most critical missions.
This notion was brought to mind Wednesday at the summit when members of an activist group disrupted Levy’s keynote, drawing attention to the company’s involvement, with Google, in a $1.2 billion cloud computing contract called Project Nimbus that supports the Israeli government and its military.
Speaking to the disruption, which halted his keynote several times, Levy said on stage: “We really are committed to hearing all of the voices around the world. We’re living in a time that’s very complicated and has a number of conflicts. And we’re open to listening to all voices.” He echoed that statement in the interview with FedScoop, saying that Amazon has been a part of domestic and global conversations on building responsibility into AI algorithms “from the beginning.”
Navigating risk and building trust — with industry partners and the American public — has made AI adoption more of a slow burn for some federal agencies. But many have taken at least a first step, if not more, to experiment with the nascent technology and apply it within their operations.
Even organizations with highly sensitive and classified data sets like the CIA and the U.S. Army, which both had technology leaders speak at the summit’s mainstage keynote on their generative AI usage, have found initial ways to apply generative AI across their enterprises — often in more administrative functions — and are exploring how to bring that closer to their missions, Levy explained.
As more agencies and organizations find success using generative AI, it will only attract more to follow in a similar way, Levy said, adding that there are examples “all over the world where governments are saying, ‘Hey, this is something where we can go in and create value for citizens.’
“I think when organizations start to see other organizations have real breakthroughs or real efficiency gains or you know, real opportunities, I think it’s going to get much more interesting,” he said. “And I think those organizations really want to move faster. We want to help them do that.”
Amazon announced a handful of new initiatives at the summit this week to do just that, notably launching what it called its “Impact Initiative,” committing $50 million in promotional credits to help public sector organizations innovate with generative AI using the company’s tools. On top of that, AWS has partnered with Anthropic, a leading generative AI developer, to make the company’s Claude 3 Sonnet and Claude 3 Haiku AI models available in the AWS Marketplace for the intelligence community’s use.
Of course, the adoption of generative AI, like the adoption of any emerging technology, comes with its transformational challenges. Levy pointed FedScoop to “data and skilling” as the biggest things that could hold an agency back from being able to “realize the promise of generative AI.”
“You have to get your data in the cloud. You have to, for a number of reasons. When you’re training a model or training data, it’s got to be in a place where the processing power can get to it,” he said. “You’ve got to get your data all in one place and in a state that is, you know, usable in that way. And once you do that, then you can start to think about … what are the opportunities I have with this data?”
Users also have to be trained on how to use the tools, Levy said, likening it to federal use of cloud and how it’s taken a dozen or more years for agencies to build the skills necessary to fully take advantage of that technology.
“Whether you build that skill up on your own team, which I think some agencies are doing, or you partly build it on your team, or you have partners that have those skills,” he said. “In some combination of that, I think that’s gonna be what gets agencies moving.”
Written by Billy Mitchell
Billy Mitchell is Senior Vice President and Executive Editor of Scoop News Group’s editorial brands. He oversees operations, strategy and growth of SNG’s award-winning tech publications, FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop.
After earning his journalism degree at Virginia Tech and winning the school’s Excellence in Print Journalism award, Billy received his master’s degree from New York University in magazine writing while interning at publications like Rolling Stone.