Editorial

Beyond Regulation: How to Prepare for Ethical and Legal AI Use

By Phil Lim
Master AI compliance and ethics with key strategies for today's business leaders.

Artificial intelligence (AI) has become a transformative force for organizations across all sectors, and it seems like everybody wants to get in on the action. AI can revolutionize how organizations operate, boosting efficiency, effectiveness and focus. However, as AI develops, organizations and their boards need to provide guidance on ethical and legal AI use.

Keeping a close eye on AI developments is even more crucial now, as the tech’s growth in businesses and society has caught the attention of regulators and governments worldwide. This heightened awareness has prompted significant actions to enhance the safety of AI applications in the past year.

Decoding the AI Risk Radar

The EU AI Act, the first-ever comprehensive legal framework on AI, positions Europe as a global leader by addressing AI risks and requiring businesses to assess their AI systems’ risk levels and comply with the corresponding regulations. The act defines four risk categories: minimal, limited, high and unacceptable. Compliance is imperative for organizations, as higher risk levels require more stringent guardrails and controls.

Drawing parallels with the General Data Protection Regulation (GDPR), the EU AI Act is also expected to significantly influence AI regulations in the United States.

States like California and New York are likely to adopt similar frameworks, ensuring a standardized approach to AI governance. This alignment helps streamline compliance efforts and sets a benchmark for ethical AI practices across different jurisdictions. As one example, President Biden issued an executive order calling for more detailed guidance on the safe use of AI.

While these actions, regulations and guidelines are a step in the right direction, they are not enough on their own. Guidelines often fall short of fully addressing AI’s ethical and legal implications, leaving businesses and boards vulnerable to legal trouble.


Steps to Ensure Responsible AI Use

AI systems are subject to existing laws, and the cost of noncompliance can be significant. For example, Europe's GDPR governs data collection and use. AI that relies on personal data for decisions needs to be GDPR-compliant to avoid hefty fines. This cost will likely rise as regulators emphasize their authority and intention to penalize noncompliant AI systems and applications.

To prepare, proactively adopt these strategies:

1. Get AI-Savvy

Board members need to familiarize themselves with AI rules, regulations and compliance obligations, as they play a vital role in steering their organizations toward responsible AI practices.

Like many other areas of the business, a top-down approach to building a compliance culture helps achieve the buy-in from leadership that drives momentum for effective change. This includes the board and C-suite shoring up their knowledge base by taking AI ethics courses and staying updated with the latest developments in AI governance and compliance.

Executives should also cultivate a culture of curiosity and continuous learning, as staying informed about AI developments and being unafraid to ask probing questions are essential for effective governance.

2. Foster Collaboration

Bringing in external expertise and being open to new insights can significantly enhance the board’s ability to oversee AI strategies effectively.

Certainly, chief technology officers (CTOs) and chief information security officers (CISOs) bring valuable internal perspectives on AI. Their technical knowledge and familiarity with the organization’s systems and processes make them indispensable in identifying potential risks and opportunities associated with AI.

These internal experts play a critical role in ensuring that AI implementation aligns with the organization’s strategic goals and complies with existing regulations and standards. They can provide insights into the technical feasibility of AI projects, potential security vulnerabilities and the integration of AI systems with existing IT infrastructure.

However, relying solely on internal expertise can result in blind spots. External consultants provide an independent and objective perspective, helping organizations uncover risks and opportunities that may not be immediately apparent to internal teams. These consultants often have a broad view of industry trends and best practices, having worked with multiple organizations across different sectors. They can offer guidance on the latest developments in AI governance, compliance and ethical considerations, ensuring that the organization remains at the forefront of responsible AI use.

3. Consider AI Risks in Cybersecurity

Organizations should consider AI risks in the context of cybersecurity, IT security and overall enterprise risk management.

In a 2024 board directors survey, 36% of respondents identified generative AI as the most challenging issue for their boards to oversee, just ahead of cybersecurity. AI has also turned the data organizations store into a gold mine, making them more sought-after targets for threat actors. In other words, the integration of AI, coupled with the high frequency of cyber incidents, has only made cyber risk an even bigger challenge for board directors.

To help mitigate cybersecurity problems, executives need to carefully evaluate who they have on their staff. This may mean having to follow the previous suggestion of bringing in a security specialist such as a CISO. In fact, a separate 2024 report shows that 73% of boards already meet with their CISO or equivalent on cyber topics on at least a quarterly basis.

4. Balance AI Innovation With Governance and Compliance

C-level executives must actively drive AI innovation while ensuring strong governance and compliance. This involves setting realistic expectations, avoiding performative governance and ensuring that AI risk management is thorough and meaningful, all of which can be aided by investing in a governance, risk and compliance solution.

By balancing innovation with ethical considerations and fostering a culture of curiosity, an organization's top leadership can harness the full potential of AI while ensuring responsible and sustainable practices.

Additionally, regular audits and evaluations are essential to ensure fairness and equity in AI decision-making processes. The deployment of AI should be guided by a strong ethical framework. The EU AI Act serves as a guiding framework, demonstrating that thoughtful regulation can drive both innovation and ethical standards, setting a precedent for global AI governance. Executives must also consider how AI will affect society, working to enhance accountability and transparency while using technology for the greater benefit.



Be Proactive, Stay Ahead

The impact of AI on businesses is undeniable, as it offers strategic avenues to be more efficient and competitive. As leadership integrates AI into their work and into their organizations, they must stay at least one step ahead of regulations and executive orders.

Regulating AI at the policy level is fundamental, but it is insufficient to guarantee AI’s ethical and safe application. Business leaders must take proactive steps to understand and manage AI risks, leveraging technology and best practices to stay ahead of regulatory requirements and protect their organizations’ reputations and assets.


About the Author
Phil Lim

Phil Lim is a director of product management at Diligent, a technology company creating and leading the modern governance movement.

Main image: New Africa on Adobe Stock