Innovation

Beyond Adoption: Leading the Responsible AI Evolution

Where is your organization on its generative artificial intelligence (AI) implementation journey?

When visiting with CEOs, AI experts, and change leaders, I hear the entire gamut. Some organizations have created full governance and approval processes for AI tool selection, while others have created change management strategies and teams to implement Copilot or ChatGPT. Some organizations are allowing certain departments to explore openly, and others, in more restricted, regulated industries, have said no to any generative AI tool in the workplace.

Yet statistics show that bring your own (BYO) AI and shadow AI are now the norms in organizations. Employees use these tools to learn, safeguard their jobs, and improve efficiencies, especially as resources are cut and challenges continue to grow.

We are in a trial-and-error period of AI, where new tools — shiny objects — are available every day. Leaders, employees, and consumers can easily get caught up in the noise and confusion, making little progress. This can lead to fatigue, a sense of overwhelm, and eventual disengagement. Meanwhile, organizations risk losing trust, security, reputation, and differentiation because they aren't prepared for AI or broader change.

AI implementation isn’t just about efficiency, tools, and adoption. CEOs know that, but many are stuck in the noise. They may be frozen by risk, grasping for control, reducing headcount to please shareholders, or feeling so far behind that any action seems better than none. Delaying is no longer an option; in fact, delay will make it harder for your organization to stand out or even survive. These paths are destructive because they neither build the foundation nor envision how the organization will operate in an AI-driven world.

AI is no longer something organizations simply adopt — it’s fundamentally reshaping how we work, make decisions, and interact with stakeholders. As AI becomes embedded in everyday business, communication and change leaders — who in many organizations are the same people — have a once-in-a-lifetime opportunity to guide their organizations through this ongoing evolution. Their role is essential in ensuring trust, transparency, and alignment with corporate values.

Even for the most experienced communication and change management professionals, this transformation feels different. Finding a way to get our arms around it for the short and long term is a challenge that requires collaboration and partnership.

Here are three areas where communication and change professionals can make a difference in helping their organizations thrive long-term in an AI-driven world.

Reframe the Conversation

We need to shift the conversation from AI adoption to recognizing AI as a fundamental change in how we work and make decisions. Instead of a one-time change initiative, AI is a continuous integration and cultural evolution. Communication and change professionals aren’t just supporting AI; they should play an integral role in leading this evolution. Shifting from viewing AI as just a tool to embracing it as part of our organizational DNA is essential for long-term success.

Define Responsible AI for Your Organization

According to the Responsible AI Institute, 74% of organizations lack a comprehensive, organization-wide approach to responsible AI. Part of this is the lack of understanding of what responsible AI means.

Responsible AI goes far beyond ethics and governance, and there are plenty of definitions floating around. For this article, let’s use the definition that I’ve worked to create with ChatGPT:

Responsible AI is the intentional and transparent integration of AI into business operations, decision-making, and customer experiences in ways that uphold organizational values, build stakeholder trust, and protect people from unintended harm.

What does that mean within your organization? Can you define responsible AI in a way that resonates with stakeholders and guides AI-driven decisions?

The first edition of the Six Global AI Guiding Principles for Responsible Communication from the Global Alliance for Public Relations and Communication Management is one resource for further defining responsible AI.* The organization is a federation of over 78 associations and universities worldwide, including IABC. While these guidelines overlay the ethical standards of its member organizations, they stand alone as a guide for our profession in an AI-driven world.

Other resources for defining responsible AI in your organization include Rolls-Royce’s The Aletheia Framework® and the Responsible AI Institute’s AI Policy Template.

Adapt the Right Change Models

If AI is a continuous integration and cultural evolution rather than a one-time change initiative, traditional change management models will likely need to be adapted to that reality.

At Reputation Lighthouse, we consider various models to guide change projects, including the Association of Change Management Professionals’ (ACMP’s) Standard, Prosci’s ADKAR, Kotter’s 8-Step Change Model, McKinsey’s 7-S Framework, and Dr. David Rock’s SCARF model (presented here by BiteSize Learning). In practice, you will likely combine elements of several to fit the organization, its culture, and the project at hand.

Given the unique demands of responsible AI implementation, we created a specific model: the Reputation Lighthouse Responsible AI Compass. This model combines several change management approaches, including SCARF and ADKAR, but primarily follows the ACMP Standard. It reflects our experience in implementing organization-wide changes across business models, workflows, workforce reduction, mergers and acquisitions, brand development, crisis-driven change, and reputation insulation.

Our Responsible AI Compass focuses on:

  • Vision
  • Organizational Readiness
  • Strategy and Planning
  • Selection
  • Preparation and Testing
  • Implementation
  • Measurement and Evaluation
  • Innovation and Growth

The above graphic showcases these areas, with foundational elements at the center that must flow throughout the entire process. The organization must be one that communicates, learns, relates, aligns, and listens — functions where communication professionals hold the keys.

As AI shifts from an operational upgrade to an ingrained cultural and strategic transformation, communication and change professionals should take the lead. The AI future is already here, and it’s our role to guide it.

*This is the first edition, as any responsible AI guide requires continuous updates. The Global Alliance Guiding Principles, first approved in April 2024, will be refreshed at an AI Symposium in Italy in May 2025.