by Jessica Kim-Schmid and Roshni Raveendhran
This recent article from Harvard Business Review caught our eye for a number of important reasons. First and foremost, for companies that are taking a skills-based hiring approach, AI resume screening and candidate sourcing go hand in hand. But, as the kids say, "obviously" there are many deeper issues to explore related to AI and hiring practices.
Summary.
Companies are struggling to hire and retain talent, and AI tools have the potential to help. Across hiring, development, and retention, AI can help companies address pain points. However, it's not as simple as plug and play: there are serious risks and drawbacks that companies need to consider if they're going to incorporate AI into their talent management processes. In particular, they need to address low trust in AI-driven decision-making, bias and ethical concerns, and legal risk.
To read the full article, visit HBR.com
Where AI Can — and Can’t — Help Talent Management
For more than a year now, organizations have struggled to hold onto talent. According to the U.S. Bureau of Labor Statistics, 4.2 million people voluntarily quit their jobs in August 2022. At the same time, there were 10.1 million job openings. Between the Great Resignation and more recent trends like “quiet quitting,” traditional approaches for winning talented workers haven’t always cut it in this fiercely competitive market.
An emerging wave of AI tools for talent management has the potential to help organizations find better job candidates faster, provide more impactful employee development, and promote retention through more effective employee engagement. But while AI might enable leaders to address talent management pain points by making processes faster and more efficient, AI implementation comes with a unique set of challenges that warrant significant attention.
Before leaders adopt these tools, they need to understand how and where AI might offer their company an edge, and how to anticipate and tackle core challenges in implementing AI for talent management.
Talent Management Pain Points and AI in Action
Talent management has three main phases: employee attraction, employee development, and employee retention. AI can help address pain points in each of these areas.
Employee Attraction
Finding and hiring the right workers can be labor-intensive, inefficient, and subject to bias. Corporate recruiters create job postings, screen resumes, and schedule interviews, processes that can be time-consuming and create bottlenecks that increase time-to-hire and, ultimately, cost the organization promising candidates. Biased language in job postings can decrease applications from traditionally marginalized groups, including women and racial minorities, and manual resume screening can itself be fraught with implicit bias.
What’s more, companies often have inconsistent processes for matching candidates to job openings beyond the one they initially applied for, leading to wasted opportunities for both candidates and organizations looking to fill roles.
AI can help by creating more accurate job postings that are appropriately advertised to prospective candidates, efficiently screening applicants to identify promising candidates, and offering processes that attempt to check human biases. For example, the platform Pymetrics uses AI in candidate assessment tools that measure actual skill demonstration and, as a result, reduce bias in the screening process. The platform also redirects "silver medalist" candidates to other fitting job opportunities, saving recruiters time by automatically re-engaging promising applicants.
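To make the screening step concrete, here is a minimal sketch of how skills-based screening and "silver medalist" redirection might work in principle. The skill sets, role names, thresholds, and scoring rule are illustrative assumptions, not a description of how Pymetrics or any other vendor actually works.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills: set

@dataclass
class Role:
    title: str
    required_skills: set

def skill_match(candidate: Candidate, role: Role) -> float:
    """Fraction of the role's required skills that the candidate demonstrates."""
    if not role.required_skills:
        return 0.0
    return len(candidate.skills & role.required_skills) / len(role.required_skills)

def screen(candidates, role, other_open_roles, shortlist_at=0.75, redirect_at=0.5):
    """Shortlist strong matches; redirect near-misses ("silver medalists") to other openings."""
    shortlist, redirects = [], []
    for c in candidates:
        score = skill_match(c, role)
        if score >= shortlist_at:
            shortlist.append((c.name, round(score, 2)))
        elif score >= redirect_at:
            best_fit = max(other_open_roles, key=lambda r: skill_match(c, r), default=None)
            if best_fit:
                redirects.append((c.name, best_fit.title))
    return shortlist, redirects

analyst = Role("Data Analyst", {"sql", "python", "dashboards"})
coordinator = Role("Ops Coordinator", {"scheduling", "dashboards"})
candidates = [
    Candidate("A. Rivera", {"sql", "python", "dashboards"}),
    Candidate("B. Chen", {"sql", "scheduling", "dashboards"}),
]
print(screen(candidates, analyst, other_open_roles=[coordinator]))
# ([('A. Rivera', 1.0)], [('B. Chen', 'Ops Coordinator')])
```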
Employee Development
Offering workers ongoing learning and development opportunities is a key aspect of talent management. A key pain point in employee development is motivating employees and ensuring they have access to appropriate opportunities. Often, employees have little information about these opportunities, and organizations find it challenging to develop enough high-quality content to keep up with employees' learning and growth needs.
AI can offer real-time solutions to tackle these pain points. For example, EdApp — an AI-based learning management system — provides employees personalized learning recommendations based on performance and engagement analytics, allows HR leaders to create micro-learning content within minutes, and enables them to track learner progress and revise content based on analytical insights.
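As a rough illustration of how such recommendations might be produced, the sketch below ranks micro-learning modules by combining a learner's assessed skill gaps with their past engagement on each topic. The weights, field names, and catalog are assumptions for illustration, not EdApp's actual algorithm.

```python
def recommend_modules(skill_gaps, engagement, catalog, top_n=3):
    """
    skill_gaps: skill -> gap size (0 = proficient, 1 = large gap), from assessments.
    engagement: skill -> past completion/engagement rate for content on that skill.
    catalog: module title -> the skill it teaches.
    Rank modules so large gaps count most, nudged by topics the learner engages with.
    """
    def score(skill):
        return 0.7 * skill_gaps.get(skill, 0.0) + 0.3 * engagement.get(skill, 0.0)

    ranked = sorted(catalog.items(), key=lambda item: score(item[1]), reverse=True)
    return [title for title, _ in ranked[:top_n]]

print(recommend_modules(
    skill_gaps={"negotiation": 0.8, "spreadsheets": 0.2},
    engagement={"negotiation": 0.6, "spreadsheets": 0.9},
    catalog={"Negotiation Basics": "negotiation", "Pivot Tables 101": "spreadsheets"},
    top_n=2,
))
# ['Negotiation Basics', 'Pivot Tables 101']
```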
Employee Retention
Finally, there’s the question of how to ensure that the employees you hired and developed stick around. A critical aspect of this is employee engagement, or employees’ commitment to and connection with their organization. A recent Gallup survey shows that only 32% of the U.S. workforce, and 21% of the global workforce, feels engaged at work. Employers often struggle to improve employee engagement because accurate engagement metrics are hard to capture. They also struggle to prevent employee burnout and promote well-being.
Various AI tools can help capture employee engagement metrics accurately in real time and create employee-focused solutions for promoting well-being. One example is Microsoft Viva + Glint, an employee experience platform that combines sentiment analysis with actual collaboration data to gauge employee engagement and well-being.
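To show the general idea behind combining survey sentiment with collaboration data, here is a minimal sketch of a composite engagement indicator with a simple burnout-risk flag. The weights, thresholds, and metric names are illustrative assumptions, not how Viva or Glint actually compute their scores.

```python
def engagement_snapshot(sentiment, after_hours_ratio, meeting_hours_per_week):
    """
    sentiment: mean pulse-survey sentiment in [0, 1].
    after_hours_ratio: share of collaboration happening outside working hours, in [0, 1].
    meeting_hours_per_week: time spent in meetings.
    Returns a composite engagement score plus a crude burnout-risk flag.
    """
    # Assumed proxies for overload: heavy after-hours work and meeting-packed weeks.
    overload = min(1.0, 0.5 * after_hours_ratio + meeting_hours_per_week / 40)
    score = max(0.0, sentiment * (1 - 0.5 * overload))
    return {
        "engagement_score": round(score, 2),
        "burnout_risk": after_hours_ratio > 0.3 or meeting_hours_per_week > 25,
    }

print(engagement_snapshot(sentiment=0.8, after_hours_ratio=0.2, meeting_hours_per_week=28))
# {'engagement_score': 0.48, 'burnout_risk': True}
```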
Where AI Tools Can Go Wrong — and How to Mitigate This Risk
AI-driven tools are not one-size-fits-all solutions, however. Indeed, AI can be designed to optimize for different metrics and is only as good as the objective it is optimized for. Therefore, to leverage AI’s full potential for talent management, leaders need to consider what AI adoption and implementation challenges they may run into. Below, we describe key challenges as well as research-based mitigation strategies for each.
Low Trust in AI-Driven Decisions
People may not trust and accept AI-driven decisions — a phenomenon known as algorithm aversion. Research shows that people often mistrust AI because they don’t understand how AI works, it takes decision control out of their hands, and they perceive algorithmic decisions as impersonal and reductionistic. Indeed, one study showed that even though algorithms can remove bias in decision-making, employees perceived algorithm-based HR decisions as less fair compared to human decisions.
Mitigation strategies include:
Fostering algorithmic literacy: One way to reduce algorithm aversion is to help users learn how to interact with AI tools. For instance, talent management leaders who use AI tools to make decisions should receive statistical training that enables them to feel confident interpreting algorithmic recommendations.
Offering opportunities for decision control: Research suggests that when people have some control over the ultimate decision, even if minimal, they are less averse to algorithmic decisions. Moreover, people are more willing to trust AI-driven decisions in more objective domains. Therefore, carefully deciding which types of talent management decisions should be informed by AI, as well as determining how HR professionals can co-create solutions by working with AI-driven recommendations, will be critical for enhancing trust in AI.
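One way to build in decision control is to treat the algorithm's output as a recommendation that an HR professional must explicitly accept or override. The sketch below is a generic, hypothetical pattern; the reviewer policy and confidence threshold are assumptions for illustration, not a prescription from the research cited above.

```python
def decide_with_human_control(candidate_id, model_recommendation, model_confidence, human_review):
    """The algorithm only proposes; the final call is routed through a person."""
    final = human_review(candidate_id, model_recommendation, model_confidence)
    return {
        "candidate": candidate_id,
        "model_recommendation": model_recommendation,
        "final_decision": final,
        "overridden": final != model_recommendation,
    }

# Stand-in for the human reviewer's judgment in this example: accept only
# high-confidence recommendations, escalate everything else to a hiring panel.
def reviewer(candidate_id, recommendation, confidence):
    return recommendation if confidence >= 0.9 else "escalate_to_panel"

print(decide_with_human_control("cand-042", "advance_to_interview", 0.72, reviewer))
# {'candidate': 'cand-042', 'model_recommendation': 'advance_to_interview',
#  'final_decision': 'escalate_to_panel', 'overridden': True}
```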
AI Bias and Ethical Implications
While AI can reduce bias in decision-making, AI is not entirely bias-free. AI systems are typically trained on existing datasets, which may reflect historical biases. In addition to the infamous Amazon AI tool that disadvantaged women applicants, other examples of bias in AI include sourcing algorithms that targeted audiences that were 85% women for supermarket cashier positions and 75% Black for jobs at taxi companies. Given AI's vulnerability to bias, applications of AI in talent management could produce outcomes that violate organizational ethical codes and values, ultimately hurting employee engagement, morale, and productivity.
Mitigation strategies include:
Creating internal processes for identifying and addressing bias in AI: To systematically mitigate bias in AI technologies, it is important to create internal processes based on how one's organization defines fairness in algorithmic outcomes and to set standards for how transparent and explainable AI decisions within the organization need to be. Leaders should also be cautious about setting fairness criteria that do not account for equity, particularly for vulnerable populations. To address this, leaders can consider including variables such as gender and race in algorithms and proactively setting different criteria for different groups to counteract pre-existing biases (a simple fairness check is sketched after this list).
Building diverse teams to design AI systems: Research indicates that more diverse engineering teams create less biased AI. By fostering diversity throughout AI design and implementation processes within their talent management function, organizations could draw on diverse perspectives to minimize AI bias.
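As a concrete example of the kind of internal check such a process might include, the sketch below computes selection rates by group and flags possible adverse impact using the four-fifths rule of thumb. The data fields and the 0.8 threshold are illustrative; an organization's own fairness definitions, and its legal counsel, should drive the actual criteria.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of {'group': ..., 'selected': True/False} records."""
    totals, selected = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        selected[d["group"]] += int(d["selected"])
    return {g: round(selected[g] / totals[g], 2) for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` x the highest group's rate."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

decisions = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]
rates = selection_rates(decisions)
print(rates)                        # {'A': 0.67, 'B': 0.33}
print(adverse_impact_flags(rates))  # {'A': False, 'B': True}
```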
Erosion of Employee Privacy
Organizations have deployed AI technologies to track employees in real time. If implemented poorly, these tools can severely erode employee privacy and lead to increased employee stress, faster burnout, deteriorated mental health, and a decreased sense of agency. Reports show that the Covid-19 pandemic drove a huge uptick in employer adoption of these tracking technologies, with more than 50% of large employers currently using AI tools for tracking.
Mitigation strategies include:
Being transparent about the purpose and use of tracking technology: Gartner Research reveals that the percentage of employees who are comfortable with certain forms of employer tracking has increased over the past decade. Acceptance rises even further when employers explain their reasoning: it grew from 30% to 50% when organizational leaders transparently discussed why these tools were being used.
Making tracking informational, not evaluative: Perhaps counter to intuition, recent research has discovered that employees are more accepting of tracking when it is conducted solely by AI without any human involvement. This work shows that technological tracking allows employees to get informational feedback about their own behavior without fear of negative evaluation. When tracking tools are deployed primarily for managerial monitoring rather than to offer employees information about their own behaviors, they erode privacy and reduce intrinsic motivation. Therefore, the key consideration for leaders should be whether tracking can enhance informational outcomes for employees without raising evaluation concerns.
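A minimal sketch of what "informational, not evaluative" could look like in practice: activity data is summarized into a private digest addressed to the employee, and no evaluative score is written to a manager-facing record. The metrics, thresholds, and delivery method below are illustrative assumptions.

```python
def weekly_self_report(focus_hours, meeting_hours, after_hours_messages):
    """Build feedback addressed to the employee; nothing here is stored for evaluation."""
    lines = [
        f"Focus time this week: {focus_hours:.1f} h",
        f"Meeting time this week: {meeting_hours:.1f} h",
    ]
    if after_hours_messages > 10:
        lines.append("You sent many messages after hours; consider protecting recovery time.")
    return "\n".join(lines)

# Delivered only to the employee (for example, as a private weekly digest),
# never to their manager or to a performance record.
print(weekly_self_report(focus_hours=14.5, meeting_hours=11.0, after_hours_messages=17))
```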
Potential for Legal Risk
According to the American Bar Association, employers could be held liable even for unintentional employment discrimination enacted by AI-driven systems. Additionally, the state, national, and international laws governing employers’ and employees’ AI-related rights and responsibilities are constantly evolving.
Mitigation strategies include:
Understanding current legal frameworks regulating AI use: While the U.S. approach to AI regulation is still in its early stages, the primary focus is on enabling accountability, transparency, and fairness in AI. The National AI Initiative Act (now law) and the Algorithmic Accountability Act of 2022 (pending) are two national-level frameworks that have been introduced to regulate AI use in organizations. But states are currently at the forefront of enacting AI regulations, so it will be important for leaders to stay abreast of changing regulations, especially when operating businesses in multiple locations.
Establishing a proactive risk management program: The wider policy landscape governing the use of AI for sensitive personnel decisions is still evolving. But organizations that hope to adopt AI tools to drive value in talent management should actively monitor pending legislation and create proactive risk management practices, such as designing AI systems with appropriate controls at various stages of the model development process.
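One simple control worth sketching: keep an auditable record of every AI-assisted employment decision, including the model version, the features it used, and the human reviewer of record, so outcomes can be reconstructed if legal or regulatory questions arise. The field names and JSON-lines format below are assumptions for illustration, not a required standard.

```python
import json
from datetime import datetime, timezone

def log_decision(log_path, candidate_id, model_version, features_used, recommendation, reviewer):
    """Append one auditable record per AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,
        "features_used": features_used,
        "recommendation": recommendation,
        "human_reviewer": reviewer,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("hiring_decisions.jsonl", "cand-042", "screening-model-v3",
             ["skills_match", "assessment_score"], "advance_to_interview", "hr_lead_1")
```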
. . .
Given the role that excellent talent management plays in maintaining competitiveness, especially in light of the Great Resignation, leaders should proactively consider how AI tools that target talent management pain points can drive impact. But significant implementation challenges must be overcome to capture the full value these tools can bring, so leaders should evaluate AI tools judiciously. They can make managing talent easier and fairer, but it's not as simple as plug and play, and if leaders want to get the most out of these tools, they need to remember that.