Effective AI Governance
- Fairness: Equality, Impartiality
- Transparency: Explainability, Repeatability
- Safety/Security: User privacy, Threat protection, Data governance
- Integrity: Intent, Justifiability
- Accountability: Auditability, Compliance, Stakeholder focus and trust
A Holistic Approach to Effective Governance
Companies need to ensure that the processes and outputs of their AI systems do not unwittingly discriminate against any group or individual. By doing so, firms can reap reputational benefits, foster greater public trust, and minimize external risks to their business.
Companies should ensure that their AI processes are explainable and repeatable. Not only does this facilitate compliance reviews and build stakeholder trust, but it also supports continued efforts to improve AI development and deployment.
Companies that establish robust capabilities in data governance, threat protection, and user privacy are better able to detect malicious incursions, thereby mitigating adverse outcomes, minimizing their legal liability, and maximizing the utility of their data.
By using data in a principled manner and verifying that AI design and implementation processes are ethically aligned and appropriate, businesses will be better positioned to manage risks and execute their internal review and oversight processes.
Companies should undertake rigorous audit and compliance assurance processes. Those that are mindful of the concerns of their various stakeholders (lawmakers, auditors, customers, business partners, and shareholders, among others) will be better placed to build confidence, fulfill regulatory requirements, and avoid complications in the future.
Compliance
Definition: Adhere to relevant laws and contribute to the regulatory agenda.
Why it matters:
- Fulfills ethical compliance standards
- Allows the business and industry to play a role in shaping the AI regulatory agenda
Auditability
Definition: Provide traceable and verifiable model outputs that can be tested both internally and externally, with simulated or real data inputs.
Why it matters:
- Enables model assessment for bias, compliance, and accuracy
- Produces auditable system records (inputs, logic, outputs) to ensure adherence to auditing standards and criteria
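To make the idea of auditable system records concrete, the minimal Python sketch below wraps a prediction function so that every call appends the inputs, the model version, and the output to an append-only log that can later be replayed with simulated or real data. The class, field names, and file path are illustrative assumptions, not part of any specific auditing standard.

```python
import datetime
import hashlib
import json


class AuditedModel:
    """Wraps a prediction function so every call leaves an auditable record:
    inputs, the exact model version (the "logic"), and the output."""

    def __init__(self, predict_fn, model_version, log_path="audit_log.jsonl"):
        self.predict_fn = predict_fn        # underlying decision logic
        self.model_version = model_version  # identifies which logic produced the output
        self.log_path = log_path            # append-only record store

    def predict(self, features: dict):
        output = self.predict_fn(features)
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": self.model_version,
            "inputs": features,
            "output": output,
        }
        # A content hash makes later tampering with the record detectable.
        record["record_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True, default=str).encode()
        ).hexdigest()
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record, default=str) + "\n")
        return output


# Usage: the same wrapper handles simulated or real inputs.
audited = AuditedModel(lambda x: x["income"] > 50_000, model_version="credit-rules-v1")
print(audited.predict({"income": 62_000}))  # True, and a record is appended to the log
```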
Data governance
Definition: Manage data assets in a holistic fashion to generate value from information.
Why it matters:
- Ensures data accessibility, usability, integrity, and security
- Maximizes the utility of data
Threat protection
Definition: Guard AI decision engines from overt intrusion and indirect malicious inputs.
Why it matters:
- Prevents unintended algorithmic outputs
- Builds user confidence in the system's ability to function safely as intended
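One practical layer of such protection is screening inputs before they reach the decision engine. The short Python sketch below is illustrative only: the schema, field names, and bounds are assumed for the example, and a production system would combine such checks with broader security controls.

```python
def validate_input(features, schema):
    """Screen inputs before they reach the decision engine: reject missing,
    non-numeric, out-of-range, or unexpected fields instead of scoring them."""
    errors = []
    for name, (low, high) in schema.items():
        value = features.get(name)
        if not isinstance(value, (int, float)):
            errors.append(f"{name}: missing or non-numeric")
        elif not low <= value <= high:
            errors.append(f"{name}: {value} outside expected range [{low}, {high}]")
    unexpected = set(features) - set(schema)
    if unexpected:
        errors.append(f"unexpected fields: {sorted(unexpected)}")
    return errors  # an empty list means the input may proceed to the model


# Hypothetical schema for a credit-decision engine.
schema = {"age": (18, 120), "income": (0, 10_000_000)}
print(validate_input({"age": 999, "income": 55_000, "note": "<script>"}, schema))
# flags the out-of-range age and the unexpected "note" field
```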
User privacy
Definition: Protect consumer privacy and restrict AI influence to the express purpose for which it is intended.
Why it matters:
- Safeguards customer rights and builds trust and reputation
- Minimizes legal liability
Repeatability
Definition: Generate predictable and reproducible outputs, complemented by effective supervision and maintenance processes.
Why it matters:
- Builds confidence in model output and reliability
- Overcomes inherent trust issues and facilitates stakeholder acceptance
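In engineering terms, repeatability starts with pinning every source of randomness and recording it alongside the result. The Python sketch below is a minimal illustration, with a placeholder pipeline and an arbitrary seed; re-running it with the same seed and data yields identical output, which is exactly what a reviewer re-executing the pipeline would verify.

```python
import random

import numpy as np


def reproducible_run(pipeline_fn, data, seed=7):
    """Pin every source of randomness and record the seed with the result,
    so the same inputs regenerate the same outputs on demand."""
    random.seed(seed)
    np.random.seed(seed)
    return {"seed": seed, "result": pipeline_fn(data)}


# A stand-in "pipeline" with a random step; any real training or sampling
# step would be seeded the same way.
data = list(range(100))
run_a = reproducible_run(lambda d: sorted(random.sample(d, 5)), data)
run_b = reproducible_run(lambda d: sorted(random.sample(d, 5)), data)
assert run_a == run_b  # identical output across runs is the repeatability check
print(run_a)
```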
Explainability
Definition: Produce explanatory diagnostics (inputs, intermediate factors, and outputs) that can be interpreted by developers, practitioners, and consumers; eliminate "black box" outputs.
Why it matters:
- Facilitates internal compliance reviews
- Builds consumer confidence and accelerates adoption
- Enables continued improvement efforts
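For simple model classes, such diagnostics can be produced directly from the model itself. The Python sketch below assumes a linear scoring model with hypothetical feature names and weights, and decomposes a score into per-feature contributions so the output can be traced back to its inputs; more complex models would require dedicated interpretability techniques.

```python
import numpy as np


def explain_linear_score(weights, feature_names, x):
    """For a linear scoring model, split the output into per-feature
    contributions so the decision can be traced from inputs to output."""
    contributions = weights * x
    score = float(contributions.sum())
    # Largest absolute contribution first: the factors that drove the decision.
    breakdown = sorted(zip(feature_names, contributions.tolist()),
                       key=lambda pair: abs(pair[1]), reverse=True)
    return score, breakdown


# Hypothetical credit-scoring features and weights.
weights = np.array([0.8, -0.5, 0.2])
names = ["income", "debt_ratio", "tenure_years"]
score, breakdown = explain_linear_score(weights, names, np.array([1.2, 0.9, 3.0]))
print(round(score, 2))  # overall score
print(breakdown)        # income and tenure push the score up; debt_ratio pulls it down
```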
Impartiality
Definition: Minimize the likelihood and occurrence of biased outcomes.
Why it matters:
- Protects the brand by mitigating algorithmic bias through internal and external oversight mechanisms
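Oversight of biased outcomes usually rests on quantitative checks. As one illustrative example (demographic parity, only one of several possible fairness metrics), the Python sketch below compares positive-outcome rates across groups using hypothetical predictions and group labels; a large gap would flag the model for review.

```python
import numpy as np


def demographic_parity_gap(predictions, groups):
    """One common bias check: compare positive-outcome rates across groups.
    A large gap flags the model for closer internal or external review."""
    rates = {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())


# Hypothetical approvals and a hypothetical protected attribute.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
rates, gap = demographic_parity_gap(preds, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5, a gap this large warrants an oversight review
```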
Equality
Definition: Promote equal access and similar opportunities for all individuals and groups.
Why it matters:
- Mitigates the risk of disenfranchisement
- Fosters public trust
- Contributes to alleviating broader societal inequality
Intent
Definition: Ensure data is used in a responsible and appropriate manner.
Why it matters:
- Prevents the negative social outcomes and brand implications associated with improper harvesting and selling of data
Justifiability
Definition: Demonstrate that design and implementation processes, as well as the decision output, are aligned with the expressed purpose.
Why it matters:
- Provides assurance that decisions adhere to intended objectives and logic
- Facilitates internal review and oversight
- Enhances risk management for new and existing models
Stakeholder focus and trust
Definition: Implement stakeholder-centered policies with clear enforcement mechanisms.
Why it matters:
- Prioritizes the collective benefit of all stakeholders (customers, shareholders, employees, partners, etc.)
- Requires a higher duty of care and disclosure to prevent improper outcomes (data expiration and use, facial recognition stipulations, etc.)
Imperative
Integrity of the AI system: purposefully designed, driven by effective oversight and agile governance, and maintained under vigilant supervision and execution.
- Overcome inherent trust issues with AI to realize the full potential of AI systems
- Build a complementary culture supported by stakeholder acceptance