Beneficence
- AI must be beneficial to humanity, society, and the environment
- AI should promote inclusive growth, sustainable development, and well-being
- AI should respect, preserve, or even increase human dignity
- Strategies for promoting beneficence include:
- Aligning AI with human values
- Minimizing power concentration
- Using power for the benefit of human rights
- Working more closely with affected and underrepresented groups
- Minimizing conflicts of interest
- Responding to customer demand and feedback
- Developing new metrics and measurements for human well-being

Non-maleficence
- AI must prevent harm and not infringe on privacy or undermine security
- Researchers need to be aware of negative impacts and take steps to mitigate them
- Developers must control risk and improve the robustness and reliability of systems to ensure data privacy, safety, and security
- Need for risk-management strategies to prevent intentional misuse, such as cyberwarfare and malicious hacking
- Establish harm prevention guidelines
- Harm prevention guidelines focus primarily on technical measures and government policies and strategies
- Concerns about multi-use or dual-use applications, with specific positions against military applications and the dynamics of an “arms race”

Privacy
- AI must treat privacy both as a value to uphold and as a right to be protected
- Three modes of protecting privacy:
- Technical solutions such as differential privacy, privacy by design, data minimization, and access control
- More research and awareness
- Regulatory approaches through government regulation and legal compliance, certificates of compliance, adaptation of laws and regulations to accommodate the specificities of AI
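Of the technical solutions listed above, differential privacy is the most formalized. As an illustration only (not from the source; the function names and toy data are assumptions), a minimal sketch of the Laplace mechanism applied to a counting query, which has sensitivity 1:

```python
import math
import random


def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(records, predicate, epsilon: float, rng=None) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query changes by at most 1 when a single record is added
    or removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    suffices.
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)


# Example: a privacy-preserving count of people over 64 (toy data).
ages = [23, 45, 67, 71, 34, 80, 52, 66]
noisy = dp_count(ages, lambda a: a > 64, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy for each individual record; the released value is useful only in aggregate.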

Freedom and Autonomy
- AI must protect and enhance autonomy: the ability to make decisions and to choose between alternatives
- AI needs to serve humanity by conforming to human values, including freedom, fairness, and autonomy
- Positive freedom for users/consumers
- Freedom to flourish
- Freedom to withdraw consent
- Freedom to use preferred platform or technology
- Negative freedom for users/consumers
- Freedom from technological experimentation, manipulation, or surveillance
- Freedom and autonomy are believed to be protected through transparency and predictability of AI

Trust
- Need to understand the importance of customers’ trust
- Need for trustworthy
- AI research and technology
- AI developers, companies, and organizations
- Trustworthy design principles
- Need to build and sustain a culture of trust through
- Education
- Reliability
- Accountability
- Processes to monitor and evaluate the integrity of AI systems over time
- Tools and techniques to ensure compliance with norms and standards
- Multiple stakeholder dialogues
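Monitoring the integrity of an AI system over time can be made concrete with a drift check on a model's inputs or outputs. A minimal sketch (illustrative only; the standardized mean-shift statistic, the threshold, and the toy data are assumptions, not from the source):

```python
from statistics import mean, stdev


def mean_shift(reference, live) -> float:
    """Shift of the live window's mean from the reference window,
    measured in units of the reference standard deviation."""
    return abs(mean(live) - mean(reference)) / stdev(reference)


def drifted(reference, live, threshold: float = 0.5) -> bool:
    """Flag the stream for human review when the shift exceeds the threshold."""
    return mean_shift(reference, live) > threshold


# Example: model scores captured at deployment vs. one month later.
baseline = [0.2, 0.4, 0.3, 0.5, 0.4, 0.3, 0.2, 0.4]
recent = [0.7, 0.8, 0.6, 0.9, 0.7, 0.8, 0.7, 0.6]
```

A flagged window does not prove the system is broken; it is a trigger for the evaluation processes and stakeholder dialogue described above.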

Justice, Fairness, and Equity
- AI must promote prosperity, justice, and fairness for all
- AI must be equitable, diverse, and inclusive and benefit as many people as possible
- AI must be accessible to different groups
- Need to acquire and process accurate, complete, and diverse data, especially training data
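Whether a system actually treats different groups equitably can be audited with simple metrics over its decisions. A minimal sketch of one such metric, the demographic parity gap (illustrative only; this is one of several fairness definitions, and the names and toy data are assumptions):

```python
from collections import defaultdict


def demographic_parity_gap(outcomes) -> float:
    """outcomes: iterable of (group, received_positive_outcome) pairs.

    Returns the largest difference in positive-outcome rates between any
    two groups; 0.0 means all groups are treated identically on this metric.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        positives[group] += int(positive)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


# Example: loan approvals recorded per applicant group (toy data).
decisions = [("a", True), ("a", False), ("a", True),
             ("b", True), ("b", False), ("b", False)]
gap = demographic_parity_gap(decisions)  # group a: 2/3 approved, group b: 1/3
```

A large gap is a signal to inspect the training data for the accuracy, completeness, and diversity issues noted above, not a verdict on its own.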

Sustainability
- Closely related to beneficence
- AI needs to create sustainable systems that process data sustainably and whose insights remain valid over time
- AI systems should be designed, deployed, and managed to increase energy efficiency and minimize their ecological footprint
- AI needs to be developed and deployed to:
- Protect the environment
- Improve the planet’s ecosystem and biodiversity
- Contribute to fairer and more equal societies
- Promote peace
- To make future developments sustainable, corporations need to create policies ensuring accountability for job losses

Solidarity
- Need for a robust social safety net to protect the labor market
- Need for a redistribution of the benefits of AI in order not to threaten social cohesion
- Warning against data collection and practices that may undermine solidarity in favor of “radical individualism”

Transparency and Explicability
- AI systems must be understandable, explainable, explicable, predictable, and transparent
- Need for increased disclosure of information by developers and those deploying the system
- Communication in non-technical terms about data use, source code, the evidence base for AI use, and system limitations
- Audits, oversight, and whistleblowing
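One widely used model-agnostic route to explainability is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. A minimal stdlib-only sketch (illustrative only; the toy model and data are assumptions, not from the source):

```python
import random


def accuracy(predict, X, y) -> float:
    """Fraction of rows the model classifies correctly."""
    return sum(predict(row) == label for row, label in zip(X, y)) / len(y)


def permutation_importance(predict, X, y, seed: int = 0):
    """Return, per feature, the accuracy drop after shuffling that column.

    A large drop means the model relies on the feature; a drop near zero
    means the feature is (behaviorally) ignored.
    """
    rng = random.Random(seed)
    base = accuracy(predict, X, y)
    drops = []
    for j in range(len(X[0])):
        column = [row[j] for row in X]
        rng.shuffle(column)
        X_perm = [row[:j] + [column[i]] + row[j + 1:] for i, row in enumerate(X)]
        drops.append(base - accuracy(predict, X_perm, y))
    return drops


# Toy model that only looks at feature 0; feature 1 is noise.
def predict(row):
    return int(row[0] > 0)


X = [[x, (x * 7) % 5] for x in range(-10, 10)]
y = [int(row[0] > 0) for row in X]
drops = permutation_importance(predict, X, y)
```

Such per-feature summaries support the non-technical communication called for above because they describe model behavior without exposing source code or training data.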