(Walter Sun, Global Head of AI, SAP)
Enterprise AI is a moving target. Address one jugular question, and three more crop up. One thing I know for certain: if software vendors develop AI in competitive silos, we’re unlikely to create the kind of AI that makes work better.
That’s why it was good to see a group of industry peers form a coalition on the impact of workplace AI: Leading Companies Launch Consortium to Address AI’s Impact on the Technology Workforce (the consortium includes the likes of Accenture, Intel, Microsoft, Cisco, and SAP – the subject of this piece).
The impact of AI on jobs is a potent, wildly misunderstood and often sensationalized issue, but it absolutely calls for planning, and immediate discussion. The consortium’s mandate is timely:
Working as a private sector collaborative, the Consortium is evaluating how AI is changing the jobs and skills workers need to be successful. The first phase of work will culminate in a report with actionable insights for business leaders and workers. Further details will be shared in the coming months. Findings will be intended to offer practical insights and recommendations to employers that seek ways to reskill and upskill their workers in preparation for AI-enabled environments.
The timing is good; I had already planned to revisit burning questions about SAP’s AI strategy before the fall event chaos kicks in. During the spring event season, I issued two missives on the core of SAP’s AI strategy.
But heading into SAP Sapphire, I had lingering questions, including a direct tie-in to this consortium: how is SAP approaching its own internal AI reskilling? (I did not hear much about this in SAP’s public earnings briefings, etc.)
Also, I had not yet met Walter Sun, who joined SAP in 2023 as SVP, Global Head of AI. At SAP Sapphire 2024, I had a revealing conversation with Sun. Looking back at the conversation with fresh eyes, Sun addressed some of the missing pieces from my prior articles. So, before SAP’s AI pursuits advance too far beyond this point, let’s get this context out.
My past conversations with Chief AI Officer Dr. Philipp Herzig were quite candid, but I didn’t know what to expect with Sun. In a hectic and notoriously over-refrigerated Orlando media/analyst center, I was about to find out.
Why don’t we build AI to challenge human biases?
One of my biggest frustrations with enterprise AI evangelists: I hear so much zeal about replacing what humans do, often beyond the scope of what these tools are capable of (exceptional content creation comes to mind). But I don’t hear enough about how AI’s strengths can balance human strengths, and vice versa. Example: AI excels at pattern recognition – so shouldn’t it be valuable in domains like HR, where it can help to flag/balance out human bias?
During our discussion of SAP’s embedded AI features, Sun brought this topic up himself, in the context of compensation management. From Q4 2023 to June 2024, SAP embedded fifty new AI-enabled scenarios in its applications, stating that it was “on track” to double that by the end of 2024. As Sun explained, SuccessFactors’ Compensation Assistant is one of those:
If you manage a team of people, and a reorg happens, or you inherit a new team; you get a bigger team. Every year, most companies have compensation rounds, or review rounds. It’s hard, because sometimes you don’t know the person very well, or don’t know the history. So this tool helps you pull data from someone’s entire company history, that looks at where their compensation level is: mean, median, and max. If it’s a top 1% performer, is he or she getting paid in the top 1%? If they’re not a top performer, are they overpaid? And so the data allows you to look at it in a less biased way.
In what way is it less biased?
Because we sometimes have this idea that the squeaky wheel gets the grease – so when someone complains about their compensation every day, [we start to think] this person is underpaid. But he or she may not be underpaid. It could just be a person complaining a lot, but the data removes biases like that, right? So that’s a tool that we think managers can use.
But Sun thinks this can help employees just as much as managers:
On the flip side, we’ve also enabled features for employees as well. Employees can get these generative tools to help write their own assessments, as well as writing assessments for their colleagues. These tools have policy checks, and they can make sure the text you’re writing isn’t biased.
For example, if you write, ‘This person’s a go-getter,’ or ‘This person works hard’ – those things aren’t quantifiable. So the tool can come back and say, ‘Hey, give me impact.’ Such as: ‘This person helped me ship these features.’ Or: ‘This person collaborated with me well, and took the time to develop specifications for this document.’ So these tools can help you write better reviews and assessments.
De-bias the humans – and de-bias the machines
Here’s a plot twist: sometimes human team members don’t like the results of “de-biased” output. But I’d argue that’s the right problem to surface – an example of AI provoking a necessary conversation. As Sun told me:
I’ve been managing for many years. We have these conversations all the time where we say, ‘Hey, let’s remove the names from the board – and let’s talk about the information.’ It’s surprising when you do that blind test, things change sometimes… People have reputations, good or bad. And that’s not necessarily a good thing to have; machines don’t care about [those] biases.
But the reality is, humans who have biases don’t like some of the results computers give when they’re not biased – once you remove the names, and just give the machines information. Then you can remove the biases, and make sure you’re compliant in terms of policy. Machines can be biased, and the algorithms can be biased, but it’s also good to use AI to help remove biases.
Indeed, machines bring biases also – though I’d argue that most of the time, those biases are due to the all-too-human biases in the training data. Still, AI developers need to be aware of these risks. Without due diligence, AI design can make biased AI output worse. Sun raises a counterpoint: we should aim to “de-bias” the machines.
Certain machines understand how to de-bias algorithms. For instance, if you want to change the composition in your workforce, you can take an algorithm and normalize it and say, ‘Okay, this is training data, but this training isn’t what I want.’ So I can re-weight it. For instance, we have three of one type of person and one of another, and you want it to be 50/50 – so the weight of the three people should be divided by three, so that you have evenly weighted the model.
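The re-weighting Sun describes can be sketched numerically. Below is my own illustrative sketch (the `balanced_weights` helper is hypothetical, not an SAP API): each group in the training data is assigned a per-example weight so that over-represented groups don’t dominate the model, matching Sun’s “three of one type, one of another” example.

```python
# Illustrative sketch of re-weighting training data so each group
# contributes equally, per Sun's example. Not an SAP implementation.
from collections import Counter

def balanced_weights(groups):
    """Return one weight per example so every group carries equal total weight."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    # Each group should carry total / n_groups of the overall weight,
    # split evenly across that group's members.
    return [total / (n_groups * counts[g]) for g in groups]

# Three examples of one type, one of another – Sun's 3-vs-1 scenario.
groups = ["A", "A", "A", "B"]
weights = balanced_weights(groups)
print(weights)  # → [0.666..., 0.666..., 0.666..., 2.0]
```

With these weights, the three “A” examples together carry the same total weight (2.0) as the single “B” example, so the model effectively sees a 50/50 composition.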
SAP’s AI ethics oversight – how does it impact AI development?
But hold up – I see an ethical elephant in this particular room; I suspect our readers do too. Doesn’t the compensation assistant example venture into what’s considered a high-risk scenario as per the EU AI Act, which I know SAP is serious about adhering to? For this type of functionality, did Sun’s team consult SAP’s AI ethics board? (To be precise, SAP has an AI ethics steering committee and an advisory board.) Short answer: yes. In our last interview, Herzig shared an example of when SAP stopped development on a particular AI feature due to ethics questions. Sun detailed how this works:
All generative AI use cases need to go by the policy team… You have people from legal, people who work with governance, and engineers as well, because I think you need both sides of the picture. The policy team provides feedback, and looks at the risks and harms related to every channel – what’s the worst case that can happen, and then discuss it. We need to reach a consensus before anything gets released.
These assistants can provide recommendations, but they can’t be the final answer. Obviously a human needs to review it, and that provides some of these safeguards. The manager looks at it before they submit anything to the employee. It’s mostly an advisory, versus it being the final say… We don’t want an algorithm to say, ‘Pay this person 10% more; pay this person 10% less.’ That’s just the first level of information. You can use it as guidance to make a good decision.
So yes, the compensation assistant went through SAP’s AI ethics steering committee. But that’s not the end of it, right? Won’t there be a future point where this type of scenario needs a fresh review? Sun:
The regulation space is always changing. Anytime someone creates updates, we have to review it, to make sure there’s no new [compliance] issue that didn’t exist before.
What does “AI reskilling” offer SAP employees?
On January 23, 2024, SAP announced a major “transformation program.” This program included a restructuring of 8,000 positions. The financial markets responded favorably. I did not. I’m consistent on this point, and it’s nothing personal to SAP; I’m not a fan of anything that resembles a headcount reduction, no matter what you call it – and we’ve seen a lot of that with tech vendors the last couple years. (At the time, SAP stated that most of the departures would be “voluntary.”)
Over the course of 2024, this has not been easy for SAP employees (I know this firsthand, because I’ve heard from a number of them). On the other hand, I am generally in favor of internal transformations – especially for large companies like SAP. Business model shifts are also about skills and culture. So it’s only fair to hear about the other side of this: where do employees go from here?
SAP’s January announcement included mention of investing in AI reskilling. At SAP Sapphire, I had a chance to ask Sun: what does this AI reskilling look like in practice? The way Sun explained it, AI reskilling at SAP sounds less like a program for specific individuals in transition, and more like a company-wide training initiative, including hands-on practice with LLM chat prompts:
We provide “AI days,” which are basically a full day of sessions, educating people of all disciplines. Coders need to know what tools are available, but people in marketing, people in sales need to understand ‘What is this generative AI feature?’. Furthermore, we’ve actually enabled this playground internally, where you can access generative AI in an enterprise-compliant way.
In the enterprise, you just don’t want new information to be used in consumer models like ChatGPT. So we have an internal tool that people can use, with enterprise security. I think that’s the best way to upskill people: what is a Large Language Model? How do you prompt a machine?
So do all employees inside of SAP have access to this reskilling?
Everyone has access to attend these sessions… I think we had over a million uses [of the gen AI playground tool] within a couple of months. People are trying it out for creating emails, just to get an idea of how these models work.
This helps us in two ways. One is that it upskills employees. Two is that these employees are subject matter experts in their lines of business. So they can come to us and say, ‘Hey, I was playing with this. Your team in the AI organization should build these features.’ So it’s a win-win situation.
My take – on AI’s job impact and SAP’s reskilling pursuit
Publishing this piece amidst SAP’s jobs consortium news seems timely. I am not an AI jobs alarmist; I believe this generation of AI tools has inherent limitations that will prevent mass unemployment. But, in some fields, the gen AI jobs impact will be strong. Gig economy creatives, especially in graphic design, have already been hammered.
Computer programmers are going to feel this jobs impact, as are “creatives” in corporate settings (though I’d tread carefully there; human creativity matters to brand impact). Certainly customer support teams will be affected, though I worry that companies will overreach with AI and prematurely displace workers – to the detriment of both the employees hit by ill-advised job cuts and the customers left dealing with techno-dystopian call centers. The point is: wherever you stand on the AI jobs impact, the conversation needs to happen now; I look forward to the consortium’s first batch of research findings.
I hope SAP shares more of its AI reskilling pursuit; SAP customers will benefit from those firsthand accounts. In my discussion with Herzig on the rise of prompt engineering, we hit on why he thinks this skill set will matter. But how will that play out in practice? To what extent will “prompt engineer” be a technical specialist role, versus a business user skill requirement? That’s just one question SAP will be in an excellent position to address.
My views on SAP’s AI strategy are mixed, but I do agree that SAP has a big opportunity to help customers succeed with AI, whereas customers’ own AI pilots may struggle. I won’t rehash my criticisms of SAP’s AI moves here, but SAP’s AI strengths are emerging also. Those strengths center around data privacy, ethics, and the responsible use of customer data – along with the investment in gen AI architectures to improve output accuracy, and reduce the hallucination factor. That’s critical, because this generation of AI systems will require customer and industry-specific data to deliver the kind of value enterprises expect. No, this won’t eliminate the output accuracy problem, not yet, but that’s another conversation.
When it comes to answering thorny AI questions, few vendors have been as candid on the record as SAP. I frequently hear from customers who are frustrated with vendors on this topic; they want a look under the hood. That means opening up on architectures, training data, external LLM use, customer data movement, log file generation – and also how ethics intersects with development. If ethics doesn’t intersect with product, then the “responsible AI” talk is just window dressing, if not outright hypocrisy.
Next time I talk with Sun, I want to ask him about the tensions between generative AI and ESG. More studies are coming out that document the energy consumption of generative AI, as compared with other, “classic” forms of deep learning. I see some ways that can be mitigated, but for a company as serious about ESG as SAP, clarity is needed here.
I don’t have those answers from SAP yet, but that’s how AI goes. New questions are guaranteed.