3 March 2026

Making AI in HR Deliver Stakeholder Value - The Risks and Returns of AI in HR

In the midst of the AI revolution, HR leaders and functions must step up and dare to reimagine the scope, contribution and responsibility of the HR function. Assessing the outcomes of AI-enabled HR requires both an assessment of how AI can improve stakeholder value - e.g., lower costs through more efficient operations, increased revenue and improved customer engagement - and an evaluation of the risks associated with such interventions.

By Dave Ulrich and Camilla Ellehave

Dave Ulrich's outside-in logic has long reminded us that if there is no marketplace, there is no workplace. In other words, HR's worth should be measured by the value we create for customers, investors, communities and partners through internal resources - not by the efficiency of HR's internal processes. Too many HR functions are missing the opportunity to radically redefine HR and make the function more effective; instead, many are frantically trying to avoid getting left behind, helplessly stuck using aged practices and mindsets to optimize the efficiency of HR.

The ROI of AI interventions in HR should be assessed against the value these interventions create for the organization's stakeholders. Yet the return and risk of AI interventions are often left unsubstantiated - and therefore the returns often go unrealized and the risks overlooked. As a strategic function, HR needs to reimagine its products, processes and delivery model and take full responsibility for enabling the organization to deliver value through the use of AI.

Eight Ways to Qualify the AI Return on Investment

The old reflex of assuming that HR's task is to automate or digitize existing HR processes might be comforting. However, it amounts to leaving money on the table - delivering a faster horse rather than designing the car. For HR to deliver value from AI interventions, we need to consider eight crucial elements to realize the maximum returns on AI investments:

  1. Strategic Alignment. AI investments are potentially expensive and disruptive. Understanding whether an intervention will accelerate the organization's strategic business objectives must be the first step. If you are competing on high quality and customer-centricity, AI interventions should focus on strengthening this position - and less on cost-efficiency interventions that may divert the necessary focus and resources from what will drive the intended stakeholder value.

  2. Data Availability. For AI to accelerate business performance, the organization needs to have data available to make better analyses, which in turn will further qualify business decisions compared to those of competitors. Business outcomes are only as good as the analyses, which are only as good as the data used to substantiate them.

  3. Data Quality. For decades, the return on our business intelligence has been hampered by the lack of reliable data. HR data seems particularly difficult for organizations to clean and to ensure validity and reliability for. While HR IT and IS have been growing, and growing in importance, they remain an immature appendix to the strategic HR agenda in most HR functions.

  4. Gain size. For many, if not most, AI interventions, we are venturing into the unknown-unknown, which makes any reliable assessment of the potential gains a central challenge for HR functions. To add insult to injury, most HR departments lack financial acumen and have limited experience building solid, quantifiable business cases for the results of their HR interventions.

  5. Change size. With big bets come expectations of big returns. Yet most of the AI initiatives implemented in HR so far have come with big investments and delivered marginal returns. When we keep focusing on incremental efficiency improvements, we miss the mark and the opportunity to take advantage of the truly transformative potential of AI. The HR function needs to start reimagining its products and its means of creating stakeholder outcomes. Making AI summarize candidate CVs for hiring managers may seem like a great idea. It is less so if it stops the HR function short of imagining initiatives that bring in candidates who add more stakeholder value, do it faster and stay longer.

  6. Organizational Change Capacity. The pace and magnitude of shifts in external trends (political, economic, social, environmental), on top of the technological revolution we are currently experiencing, add complexity and create a constant stream of changes for most organizations seeking to stay relevant and competitive. This stream of changes challenges the “change bandwidth” of organizations and their employees. Assessing the gains of AI interventions therefore needs to include whether these interventions add to an already stretched organizational change capacity, or whether AI could actually help reduce the changes employees experience.

  7. Solution Complexity and Scalability. The returns of an AI investment can be seriously reduced if the AI solution cannot easily be integrated into the existing IT infrastructure, or if it remains a one-off solution tailored to a single business unit or geography. Before we fall in love with the bells and whistles of the tech, we should consider how a solution fits into the future IT landscape and how it could create returns for a larger population of stakeholders.

  8. Capability Availability. Similar to the assessment of organizational change capacity, we also see the returns of AI investments radically reduced by organizations' inability to attract and/or dedicate the necessary resources and talent to make interventions “short and fat”. When resources cannot be found or dedicated, AI interventions suffer the same fate as other projects that become “long and thin” (read: spread thinly over an extended period with limited progress). The market for AI talent is increasingly competitive, and organizations need to weigh the expected AI returns against the availability of AI capabilities.

Eight Risks of AI

As the AI (r)evolution continues, remembering an old adage might be helpful: “There is no such thing as a free lunch.” AI use does come with its costs; but rather than proclaim the downsides of AI, let me discuss eight risks that - once identified - might be managed.

  1. Information parity. Relying on AI information reduces variance. When an organization wants to invest in an HR initiative (e.g., criteria for talent acquisition, leadership development, etc.), AI can be and often is used to share what has been done by others. To upgrade plant managers, a manufacturing company used AI to quickly define seven key competencies of effective plant managers in its industry. My simple question was, “How many of your competitors have done the same exercise and identified the same seven competencies?” All of them!

    AI reduces variance by sharing information - often with clever prompts - that anyone, anywhere, anytime can access. Benchmarking that might have taken a team weeks or months can now be done by an individual in minutes. But remember that a primary lesson of benchmarking is not to do what others have done or are doing. The point of benchmarking is to go beyond, seeking “next practices.” If everyone has the same competencies, for example, why should customers choose one organization over another for the product or service they want?

    The risk of information parity can be overcome by focusing on differentiation that leads to advantage by using AI as a starting point and foundation. The manufacturing company identified additional unique plant manager characteristics consistent with their desired identity and culture.

  2. Cognitive decline. Muscles grow with exercise and atrophy with disuse. Research has shown that students who rely on AI to prepare their papers experience cognitive decline. The risk of cognitive decline increases when we depend on AI to produce a paper, report, presentation, or document. Cognitive decline is mitigated when AI information is coupled with human creativity and insight that lead to innovation.

    One firm sourced AI answers to problems, then had groups discover how to move beyond, tailor, and implement the AI-reported answers.

  3. Wrong or misleading information. Most people have used AI to generate information on a topic of personal expertise and discovered inaccurate or incomplete information. For example, have AI write your obituary or resume to see how accurate it is - especially in the details.

    In addition, AI may also provide misleading information. For example, we have studied HR competencies with over 100,000 respondents over 35 years, while others have done so with a convenience sample of friends on LinkedIn. Too often AI equates the two studies, which is misleading. Or, for another example, I have consistently defined competence as individual ability and capability as organizational ability in a number of books (since 1990) and articles. Yet when I ask ChatGPT to report on my work, it misrepresents my thinking. Further, when others use AI-generated information, they cite the AI misrepresentation (not knowing or reading the original work), which further obfuscates ideas. Overcoming the risk of flawed information requires analytical thinking to vet the information provided.

  4. (False) emotion. AI often feigns emotional connection by asking questions to further discussion, using active listening to engage, and offering affirming responses to queries. Researchers have shown the risk of AI-exclusive counselling, where clients form an emotional connection to chatbot therapists (Woebot, Wysa, Tess). To avoid the risk of false emotion, Artificial Intelligence needs to be coupled with “Authentic Intimacy”: experiencing emotional support - with compassion, care, and concern - from real human beings.

  5. Privacy. Information gained through engaging with AI can be and is stored. Just like Amazon knows a person’s lifestyle and habits through their purchases and Google through their searches, AI becomes a deep source of personal information about the user’s thinking through queries and engagement. The risk to data security needs to be managed by policies addressing confidentiality, integrity, access, and availability.

  6. Fake vs. real. AI can now produce reports, videos, images, and comments that appear real - even when they are fake. The percentage of bot-driven posts and comments is increasing dramatically on both X and LinkedIn (estimated at up to 50 percent). Reduce this risk by using AI content detection tools (e.g., Winston AI, GPTZero, Grammarly AI Detector) to determine whether a comment is bot- or person-generated. For example, I have discovered that some of the comments on my posts are “19% human” and more likely AI/bot generated, which informs my response (or lack thereof).

  7. Living backward and recycling. AI does an incredible job curating the past, but the past is not always a good prologue to the future. Because something worked or did not work in the past does not mean it will be effective going forward (as in the benchmarking example above). Most GenAI reports on HR processes summarize what has been done, and agentic AI (bots) packages these legacy processes into proposed solutions. Replacing the past with the future means knowing the past in order not to repeat or repackage it. Overcoming the recycling risk and spiralling forward means creating new solutions that advance what has been done, based on the changing business context, by coupling human and artificial intelligence.

  8. Accountability diffusion. Using AI to improve decision-making is a shared responsibility that includes experts in technology, finance, HR, legal, strategy, and marketing. These participants should form an AI governance committee to shape AI strategy, allocate resources, and set policy. However, they may lack clear ownership for progress. The accountability diffusion risk is reduced when this committee sets clear AI objectives, investments, and standards with metrics to ensure responsibility.

Manage AI Returns and Risks to Make Progress

As we coach HR leaders on how they can contribute to driving AI impact in the future,
we suggest the following:

  • Be an advocate of AI-enabled work. Adopt, and help others adopt, a positive mindset about how AI provides information that improves decisions to deliver stakeholder value. Be an active contributor to groups assigned to AI governance.

  • Envision AI as an enabler for work and not just a replacement of people. Replace
    fear of loss with opportunity for progress.

  • Help navigate the paradoxes of AI by engaging the right people in the right
    conversations.

  • Model the proper use of AI for yourself and encourage its correct use for all HR
    work.

  • Include discussion of AI risks (such as those I’ve identified above) as part of enterprise risk management efforts.

  • Continually integrate technology and people. Don't let AI replace IQ (intelligence quotient), EQ (emotional quotient), or SQ (social quotient) - let it amplify them by continually encouraging human emotion, energy, and empathy as central to how work gets done.

  • Embed AI as an ongoing and integrated part of work, not a separate agenda.

  • What would you add?

Considering both returns and risks gives HR leaders and functions an opportunity to make AI in HR an agenda worth pursuing.

Written by

Dave Ulrich, Rensis Likert Professor Emeritus, University of Michigan, Partner, The RBL Group, dou@umich.edu

Camilla Ellehave, Ph.D., Managing Partner, RBL Europe, Cellehave@rbl.net