January 25, 2024 By Jennifer Kirkwood 5 min read

Growing up, my father always said, “do good.” As a child, I thought it was cringeworthy grammar and I would correct him, insisting it should be “do well.” Even my children tease me when they hear his “do good” advice, and I’ll admit I give him a pass on the grammar front.

In the case of responsible artificial intelligence (AI), avoiding harm should be a central focus for organizations. Some organizations may also aim to use AI for “doing good.” Often, though, AI needs clear guardrails before it can truly be called “good.”

Read the “Presidio AI Framework” paper to learn how to address generative AI risks with guardrails across the expanded AI life cycle

As generative AI continues to go mainstream, organizations are excited about its potential to transform processes, reduce costs and increase business value. Business leaders are eager to redesign their business strategies to serve customers, patients, employees, partners or citizens more efficiently and improve the overall experience. Generative AI is opening doors and creating new opportunities and risks for organizations globally, with human resources (HR) leadership playing a key role in managing these challenges.

Adapting to the implications of increased AI adoption can mean complying with complex regulatory requirements and frameworks such as the NIST AI Risk Management Framework, the EU AI Act, New York City Local Law 144, US EEOC guidance and the White House Executive Order on AI, all of which directly affect HR and organizational policies, as well as social, job-skilling and collective bargaining labor agreements. Adopting responsible AI requires a multi-stakeholder strategy, as affirmed by leading international bodies including NIST, the OECD, the Responsible Artificial Intelligence Institute, the Data and Trust Alliance and IEEE.

This is not just an IT responsibility; HR plays a key role

HR leaders now advise businesses about the skills required for today’s work as well as future skills, considering AI and other technologies. According to the World Economic Forum (WEF), employers estimate that 44% of workers’ skills will be disrupted in the next five years. HR professionals are increasingly exploring AI’s potential to improve productivity by augmenting the work of employees and empowering them to focus on higher-level work. As AI capabilities expand, there are ethical concerns and questions every business leader must consider so that their AI use does not come at the expense of workers, partners or customers.

Learn the principles of trust and transparency recommended by IBM for organizations to responsibly integrate AI into their operations.

Worker education and knowledge management are now coordinated as a multi-stakeholder strategy with IT, legal, compliance and business operations, and as an ongoing process rather than a once-a-year checkbox. As such, HR leaders need to be closely involved in developing programs to create policies and grow employees’ AI acumen, identifying where to apply AI capabilities, establishing a responsible AI governance strategy and using tools like AI and automation to help ensure thoughtfulness and respect for employees through trustworthy and transparent AI adoption.

Challenges and solutions in adopting AI ethics within organizations

Although AI adoption and use cases continue to expand, organizations may not be fully prepared for the many considerations and consequences of adopting AI capabilities into their processes and systems. While 79% of surveyed executives emphasize the importance of AI ethics in their enterprise-wide AI approach, less than 25% have operationalized common principles of AI ethics, according to IBM Institute for Business Value research.

This discrepancy exists in part because policies alone cannot keep pace with the prevalence and growing use of digital tools. Workers increasingly use smart devices and apps such as ChatGPT and other black-box public models without proper approval, and often without the change management needed to inform them of the associated risks.

For example, workers might use these tools to write emails to clients using sensitive customer data or managers might use them to write performance reviews that disclose personal employee data. 

To help reduce these risks, it may be useful to embed responsible AI practice focal points or advocates within each department, business unit and functional level. This is an opportunity for HR to drive and champion efforts to head off potential ethical challenges and operational risks.

Ultimately, it is imperative to create a responsible AI strategy with common values and principles that are aligned with the company’s broader values and business strategy and communicated to all employees. This strategy needs to advocate for employees and identify opportunities for the organization to embrace AI and innovation that push business objectives forward. It should also educate employees to help guard against harmful AI effects, address misinformation and bias and promote responsible AI, both internally and within society.

Top 3 considerations for adopting responsible AI

The top 3 considerations business and HR leaders should keep in mind as they develop a responsible AI strategy are:

Make people central to your strategy

Put another way, prioritize your people as you plot your advanced technology strategy. This means identifying how AI works with your employees, communicating specifically to those employees how AI can help them excel in their roles and redefining the ways of working. Without education, employees could worry that AI is being deployed to replace them or eliminate the workforce. Communicate directly and honestly with employees about how these models are built. HR leaders should address potential job changes, as well as the realities of new categories and jobs created by AI and other technologies.

Enable governance that accounts for both the technologies adopted and the enterprise

AI is not a monolith. Organizations can deploy it in many different ways, so they must clearly define what responsible AI means to them, how they plan to use it and where they will refrain from using it. Principles such as transparency, trust, equity, fairness, robustness and the use of diverse teams, in alignment with OECD or RAII guidelines, should be considered and designed into each AI use case, whether it involves generative AI or not. Additionally, each model should undergo routine reviews for model drift and privacy measures, along with specific diversity, equity and inclusion metrics for bias mitigation.

Identify and align the right skills and tools needed for the work

The reality is that some employees are already experimenting with generative AI tools to help them answer questions, draft emails and perform other routine tasks. Therefore, organizations should act immediately to communicate their plans to use these tools, set expectations for employees who use them and help ensure that their use aligns with the organization’s values and ethics. Organizations should also offer skill development opportunities to help employees upskill their AI knowledge and understand potential career paths.

Download the “Unlocking Value from Generative AI” paper for more guidance on how your organization can adopt AI responsibly

Practicing and integrating responsible AI into your organization is essential for successful adoption. IBM has made responsible AI central to its AI approach with clients and partners. In 2018, IBM established the AI Ethics Board as a central, cross-disciplinary body to support a culture of ethical, responsible and trustworthy AI. The board comprises senior leaders from research, business units, human resources, diversity and inclusion, legal, government and regulatory affairs, procurement and communications, and it directs and enforces AI-related initiatives and decisions. IBM takes the benefits and challenges of AI seriously, embedding responsibility into everything we do.

I’ll allow my father this one broken grammar rule. AI can “do good” when managed correctly, with the involvement of many humans, guardrails, oversight, governance and an AI ethics framework. 

Watch the webinar on how to prepare your business for responsible AI adoption

Explore how IBM helps clients in their talent transformation journey