The technological landscape is undergoing a radical transformation, with generative AI at its epicentre. Since the unveiling of ChatGPT in November 2022, the milestones achieved have been nothing short of extraordinary. OpenAI introduced GPT-4 in March 2023; by May 2023, Anthropic had expanded its Claude model's context window from roughly 9,000 to 100,000 tokens of text, enough to process an average novel in about a minute. Google has also showcased its innovations, launching the generative AI-driven Search Generative Experience and introducing PaLM 2, a state-of-the-art LLM that powers its Bard chatbot and a host of other tools.
These advancements mark an economic revolution. A 2023 McKinsey report provides a tantalising glimpse into the potential, indicating that generative AI could infuse between $2.6 trillion and $4.4 trillion annually into the global economy across various applications. To offer some context, the United Kingdom's entire GDP in 2021 was $3.1 trillion. In the midst of this technological renaissance, industries, including the Human Resources (HR) sector, are positioned at a crossroads, poised to tap into the vast potential of AI while carefully navigating the challenges and nuances it introduces.
Promise and perils
HR departments stand at the nexus of economic, legal, business, and cultural shifts. Navigating the ebb and flow of talent, managing surpluses during economic downturns and grappling with shortages in prosperous times, they adapt as societal norms evolve and laws change.
In this modern era, many HR departments are harnessing the power of AI to streamline recruitment, enhance employee experiences, and facilitate aspects like onboarding and benefits administration. However, with AI's integration comes the heightened responsibility of guarding sensitive employee data, a task rendered complex by AI's expansive data processing abilities, which may inadvertently expose vulnerabilities. There are other nuances and risks, too, like potential biases in AI algorithms and concerns over transparency in automated decisions.
Given this landscape, pressing questions emerge: what are the comprehensive risks when weaving AI into HR processes? And how can organisations wield AI’s transformative potential judiciously, ensuring both innovation and the protection of all involved?
The double-edged sword
While AI promises to streamline recruitment processes, it's paramount to remember that it doesn't supersede human judgement; rather, it supplements it. And a tool is only as impartial as its creators: unconscious biases, stemming from, say, an engineer's beliefs or the predominant culture in a workplace, can inadvertently skew an AI's parameters.
Consider a hypothetical scenario: if a developer, situated in a predominantly male engineering department, holds a latent belief that men are superior engineers, they might inadvertently program the AI to favour male candidates. This isn’t mere speculation.
In one widely reported example, Amazon's experimental machine-learning recruiting tool was found to discriminate against women. The system had been trained on a decade's worth of resumes, predominantly from male applicants. As a result, it penalised resumes containing the word "women's," as well as references to women-only colleges or sororities. Simultaneously, it showed a preference for terms more common on male applicants' resumes, like "captured" and "executed."
Although human intervention was present in the final stages of recruitment, the skewed funnel presented by the AI system hampered genuinely equitable hiring decisions. Reflecting on such episodes, companies like Amazon have since become more discerning in their reliance on AI for recruitment.
Responsible recruiting using AI
Fortunately, when the AI platform leverages machine learning, it can be “re-trained” or recalibrated to follow and even improve company processes. Here are a few steps you can follow to help ensure that AI is helping your company become more diverse and equitable:
- Supply equitable data — To promote unbiased hiring, input balanced data. For instance, provide the system with resumes from a diverse pool of candidates, ensuring that it doesn’t inadvertently lean towards characteristics predominantly associated with one gender.
- Regularly audit outcomes — Compare the diversity of candidates the AI advances against your historical baselines. If you see less diversity in your recruiting pipeline than before, investigate and recalibrate before the bias compounds.
- Engage a diverse design team — Engaging a varied team for AI implementation can help in reducing inherent biases and ensuring a comprehensive approach.
- Continual feedback is key — When your AI makes a mistake, let it know. Conversely, when it identifies a stellar candidate, reinforce such decisions. This constant feedback loop will ensure that the system learns and adapts effectively.
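To make the "audit outcomes" step concrete, here is a minimal sketch of one common check: comparing selection rates across groups against the four-fifths (80%) rule of thumb from US employment guidance. The group labels, outcome data, and threshold below are purely illustrative, not drawn from any real system:

```python
from collections import Counter

def selection_rates(candidates):
    """Compute per-group selection rates from (group, selected) records."""
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag whether each group's rate is at least 80% of the highest rate."""
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, advanced_to_interview)
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)   # A: 0.75, B: 0.25
print(four_fifths_check(rates))     # group B falls below the 80% threshold
```

A check like this is cheap to run on every audit cycle; a failing group is a signal to investigate the training data and model, not a verdict on its own.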
Reducing attrition using AI
India’s annual attrition rate is about 20%, and the replacement process is both expensive and challenging. It can be difficult for humans to spot early warning signs that an employee is at risk of leaving or that a prospective hire may not be a good long-term fit. A proficiently trained machine-learning system, on the other hand, can detect these signs preemptively.
Yet the immediate priority lies in gathering the data needed to enhance the workplace environment for the existing workforce. At Responsive (formerly known as RFPIO), we employ a range of metrics and data sources to gauge employee contentment, a prominent tool being our Employee Engagement Survey. AI can sift through these survey results, pinpointing areas that warrant improvement.
For example, after deploying our Employee Engagement Survey, we managed to identify and rectify team-specific concerns. This not only bolstered retention rates but also lifted overall employee satisfaction. And in response to our employees' feedback, we introduced a fresh suite of benefits.
For individual employees, AI can monitor risk factors, such as declining engagement, customer satisfaction scores, or performance metrics, as well as rising absenteeism, to flag employees who might need additional support.
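As a rough illustration of that kind of flagging, the sketch below scores hypothetical employees with a simple rule-based model. The field names, weights, and thresholds are invented for the example; a production system would learn them from historical data rather than hard-code them:

```python
# Illustrative rule-based attrition-risk score; weights and thresholds
# are assumptions for the example, not derived from any real HR dataset.
def attrition_risk(engagement, csat, performance, absences_per_quarter):
    """Return a 0-1 risk score; the first three inputs are on a 0-1 scale."""
    risk = 0.0
    if engagement < 0.5:
        risk += 0.35
    if csat < 0.6:
        risk += 0.25
    if performance < 0.5:
        risk += 0.20
    if absences_per_quarter > 4:
        risk += 0.20
    return risk

def flag_for_support(employees, threshold=0.5):
    """Return the ids of employees whose risk score crosses the threshold."""
    return [e["id"] for e in employees if attrition_risk(
        e["engagement"], e["csat"], e["performance"], e["absences"]) >= threshold]

staff = [
    {"id": "e1", "engagement": 0.8, "csat": 0.9, "performance": 0.7, "absences": 1},
    {"id": "e2", "engagement": 0.3, "csat": 0.5, "performance": 0.6, "absences": 6},
]
print(flag_for_support(staff))  # ['e2']
```

The point of the flag is to prompt a supportive conversation with a manager, not an automated decision about the employee.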
AI and employee privacy
Privacy concerns are more daunting than ever, and privacy breaches can be devastating for both victims and businesses. With AI's capability to effortlessly access sensitive data like tax ID numbers, addresses, performance records, compensation, and more, vigilance becomes paramount.
Moreover, the data shared on open Generative AI models, such as ChatGPT, becomes accessible to a wider audience, including potential competitors. This openness means that queries posed by users contribute to the public data model, thereby heightening the risk of inadvertently sharing proprietary or confidential company information.
Consequently, it becomes imperative for businesses to harness secure, private AI models, ensuring the utmost protection of all recruitment-related and employee-specific data.
Paradoxically, when wielded appropriately, AI can emerge as a robust shield for privacy, often outperforming human efforts. Consider the following strategies:
- End-to-end encryption for all employee data — This will ensure that even in the eventuality of a breach, the data remains indecipherable to unauthorised entities.
- Anonymise data where possible — Strip identifying details outright where you can; when data can't be fully anonymised, assign pseudonyms to mask the original identifiers.
- Conduct regular security audits — Address weaknesses and vulnerabilities before they manifest into tangible threats.
- Train employees — Teach them the types of data that are being collected and how they can protect both their data and the data belonging to customers. Evolve the training content in alignment with emergent risks and updated protocols.
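As one concrete sketch of the pseudonymisation strategy above, a keyed hash can replace a direct identifier: the same input always yields the same pseudonym, so records can still be joined across tables, but the mapping cannot be reversed without the secret key. The key, field names, and record here are purely illustrative:

```python
import hashlib
import hmac

# Hypothetical pseudonymisation helper. In practice the key would live in a
# secrets manager and be rotated, never hard-coded as it is in this example.
SECRET_KEY = b"example-key-rotate-and-store-in-a-vault"

def pseudonymise(identifier: str) -> str:
    """Return a stable, non-reversible pseudonym for an identifier."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Illustrative employee record with the direct identifier masked before
# it is handed to an analytics or AI pipeline.
record = {"employee_id": "EMP-1042", "salary_band": "L4", "engagement": 0.62}
safe_record = {**record, "employee_id": pseudonymise(record["employee_id"])}
```

Because an HMAC is used rather than a plain hash, an attacker who obtains the pseudonymised table cannot brute-force employee IDs without also obtaining the key.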
Should the government regulate AI?
While the rapid evolution of AI presents challenges for regulatory bodies, a few governments have been proactive in setting up frameworks to ensure responsible and ethical AI deployment. The European Union stands out, having advanced comprehensive rules through its AI Act, setting a precedent that other global entities may soon follow. As AI permeates HR processes, there are specific measures governments can contemplate to ensure ethical and transparent practices:
- Address biases — Governments should set forth clear guidelines and standards for fairness in hiring.
- Transparency protocols — When individuals interact with an AI system, it should be made transparent to them. Whether it’s a chatbot during a customer support interaction or a recruitment tool, users should be aware that they are communicating with an AI entity.
- Mandate privacy and data protection — Not only should companies encrypt data along with other safeguards, but they should also let people know how their data is being collected and used.
- Hold designers and business users accountable — If a designer or business engages in irresponsible data collection or use, they should face repercussions.
- Routine audits — Regular audits of AI systems should be mandated, not just when there’s an evident issue but also as a preventive measure. These audits would serve as a check on the systems, ensuring they operate within defined ethical and legal boundaries.
The future of AI
The landscape of HR is undeniably undergoing a seismic shift. As AI permeates this domain, its potential to foster inclusivity and revolutionise workplace dynamics is immense. Yet its incorporation is not a one-size-fits-all solution. The efficacy of AI in HR hinges not merely on its technological prowess but on its judicious deployment. It promises unparalleled efficiency, but realising that promise demands conscientious effort, continuous learning, and robust regulatory oversight. As for whether every business should add AI to its HR teams, there's no clear answer. The future beckons, but it falls to businesses to tread with both ambition and caution.
About the author: Shipra Kamra is the Head of People Operations at Responsive. With nearly 20 years in Human Resources, she has honed her expertise at eminent companies like Nike, Target, and CDK Global. Shipra excels in reshaping organisational strategies, talent management, and leadership development.
Year of Incorporation: 2015
Business line: Responsive (formerly RFPIO) is the global leader in strategic response management software, transforming how organizations share and exchange critical information.