A Taxonomy of AI Values
In today's rapidly evolving AI landscape, understanding the values that guide AI development is crucial for creating systems that benefit humanity while minimizing potential harms. This taxonomy provides a structured view of the values essential for responsible AI.
Understanding AI Value Taxonomies
AI value taxonomies serve as structured frameworks for classifying and organizing the ethical principles, human values, and normative considerations that should guide artificial intelligence development and deployment.
These taxonomies have emerged as critical tools for ensuring that AI systems align with human priorities across diverse contexts. As AI becomes increasingly integrated into everyday life, these classification systems help developers, policymakers, and users understand and implement values-based approaches to technology.
The development of AI value taxonomies represents a multidisciplinary effort spanning ethics, computer science, philosophy, and social sciences. Researchers have approached the task of classifying AI values from various perspectives, yielding taxonomies that range from broad ethical frameworks to detailed technical requirements.
Why AI Value Taxonomies Matter
Value taxonomies provide several crucial benefits for the AI ecosystem:
They create a common language for discussing ethical considerations across different domains and applications
They help identify gaps or imbalances in how values are represented in AI systems
They facilitate the operationalization of abstract principles into concrete technical implementations
They support alignment between human expectations and AI behaviors
They enable more systematic evaluation of AI systems against ethical standards
Major AI Value Taxonomies
Human-Centered Value Frameworks
Research from Purdue University has identified a taxonomy of human values that should be represented in AI training data, including:
Well-being and peace
Information seeking
Justice, human rights, and animal rights
Duty and accountability
Wisdom and knowledge
Civility and tolerance
Empathy and helpfulness
Their analysis of training datasets used by leading AI companies revealed significant imbalances, with wisdom, knowledge, and information-seeking values predominating, while justice, human rights, and animal rights were least represented.
This imbalance can create "blind spots" in AI systems that may affect how they respond to queries related to underrepresented values.
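The kind of coverage analysis described above can be sketched as a simple label count over a labeled corpus. This is a minimal illustration, not the Purdue methodology: the dataset, labels, and the 5% representation threshold are all invented for the example.

```python
from collections import Counter

# Value categories from the taxonomy discussed above.
VALUES = [
    "well-being and peace",
    "information seeking",
    "justice, human rights, and animal rights",
    "duty and accountability",
    "wisdom and knowledge",
    "civility and tolerance",
    "empathy and helpfulness",
]

def value_coverage(labeled_examples, threshold=0.05):
    """Count how often each value category appears in a labeled dataset
    and flag categories falling below a representation threshold
    (the threshold is an arbitrary choice for illustration)."""
    counts = Counter(label for _, label in labeled_examples)
    total = sum(counts.values())
    gaps = [v for v in VALUES if counts.get(v, 0) / total < threshold]
    return counts, gaps

# Toy dataset of (text, value label) pairs; labels are illustrative only.
data = [
    ("How do vaccines work?", "wisdom and knowledge"),
    ("Summarize this article", "information seeking"),
    ("Explain photosynthesis", "wisdom and knowledge"),
    ("Is this hiring policy fair?", "justice, human rights, and animal rights"),
]
counts, gaps = value_coverage(data)
```

Even on this toy corpus, the flagged `gaps` list surfaces the same kind of "blind spot" the researchers reported: categories that simply never appear in the training data.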
Normative Ethical Principles Taxonomy
A comprehensive survey of AI and computer science literature has yielded a taxonomy of 21 normative ethical principles that can be operationalized in AI systems.
This taxonomy emerged from examining how ethical principles derived from moral philosophy can be methodically integrated into AI reasoning capacities.
The researchers categorized these principles into eleven key areas:
Deontology
Egalitarianism
Proportionalism
Consequentialism
Justice
Virtue ethics
Care ethics
Utilitarianism
Contractarianism
Libertarianism
Religious ethics
This macro-ethics approach views ethical considerations through a holistic lens that incorporates social context, enabling more nuanced ethical reasoning in AI systems.
Empirical Taxonomy from Real-World Interactions
Anthropic has developed what they describe as "the first large-scale empirical taxonomy of AI values" by analyzing values expressed in real-world conversations between AI systems and users.
Their research focused on discovering how values manifest in actual interactions rather than starting with predetermined theoretical frameworks.
This approach acknowledges that AI systems make value judgments in many contexts, such as:
Providing advice on childcare (balancing safety versus convenience)
Suggesting approaches to workplace conflicts (emphasizing assertiveness versus harmony)
Helping draft apologies (focusing on accountability versus reputation management)
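The bottom-up character of this approach can be sketched as aggregating observed value expressions rather than starting from a fixed framework. The record structure and field names below are illustrative assumptions, not Anthropic's actual schema.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class ValueObservation:
    """One value expressed by an AI response in a real conversation,
    including the competing value it was traded off against.
    Fields are illustrative, not a real annotation schema."""
    context: str          # e.g. "childcare advice"
    value: str            # e.g. "safety"
    competing_value: str  # e.g. "convenience"

def build_empirical_taxonomy(observations):
    """Aggregate observed values bottom-up: count which values appear
    in which contexts, letting categories emerge from the data."""
    return Counter((o.context, o.value) for o in observations)

obs = [
    ValueObservation("childcare advice", "safety", "convenience"),
    ValueObservation("workplace conflict", "harmony", "assertiveness"),
    ValueObservation("drafting an apology", "accountability", "reputation"),
    ValueObservation("childcare advice", "safety", "cost"),
]
taxonomy = build_empirical_taxonomy(obs)
```

The resulting counter plays the role of an empirical taxonomy in miniature: frequently co-occurring (context, value) pairs become its categories, with no theoretical framework imposed up front.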
AI Use Taxonomy
The National Institute of Standards and Technology (NIST) has developed an AI Use Taxonomy that classifies how AI systems contribute to human-AI tasks. This taxonomy identifies 16 AI use "activities" independent of specific AI techniques or application domains.
The taxonomy aims to:
Provide common terminology for describing outcome-based human-AI activities
Enable cross-domain insights
Highlight commonalities in measurement needs
Facilitate use case development
Support evaluation of trustworthiness and usability
This approach focuses on decomposing complex human-AI tasks into activities that remain relevant despite rapid technological advancement or application in new domains.
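Describing use cases strictly in terms of a fixed activity vocabulary is what enables the cross-domain comparison the taxonomy aims for. The sketch below assumes a made-up subset of activity names for illustration; NIST's actual taxonomy defines 16 activities with its own terminology.

```python
# Illustrative activity vocabulary; placeholder names, not NIST's 16.
ACTIVITIES = {"recognize", "recommend", "predict", "generate", "translate"}

def describe_use_case(name, activities):
    """Describe a human-AI use case only in taxonomy activities,
    rejecting ad-hoc terms so use cases stay comparable across domains."""
    unknown = set(activities) - ACTIVITIES
    if unknown:
        raise ValueError(f"not taxonomy activities: {sorted(unknown)}")
    return {"use_case": name, "activities": sorted(set(activities))}

case = describe_use_case("radiology triage", ["recognize", "recommend"])
```

Rejecting unknown terms is the point of the design: two use cases from entirely different domains can be compared because both are forced into the same technique-independent vocabulary.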
Trustworthiness Taxonomy
A particularly detailed taxonomy developed for AI trustworthiness includes 150 properties organized around the seven key characteristics defined by NIST:
Valid and reliable
Safe
Secure and resilient
Accountable and transparent
Explainable and interpretable
Privacy-enhanced
Fair with harmful bias managed
This taxonomy provides a framework for implementing trustworthiness across the AI lifecycle and complements regulatory efforts like the EU AI Act by offering voluntary considerations for AI stakeholders.
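A characteristics-to-properties taxonomy like this lends itself to a simple checklist structure. The sketch below is an assumption-laden miniature: the property names are invented placeholders standing in for the 150 real properties, and the scoring rule (fraction of properties satisfied) is our own illustration, not part of the taxonomy.

```python
# Nested taxonomy: NIST characteristic -> illustrative properties.
TAXONOMY = {
    "valid and reliable": ["accuracy measured", "failure modes documented"],
    "safe": ["hazard analysis done"],
    "secure and resilient": ["threat model exists"],
    "accountable and transparent": ["decision logs kept"],
    "explainable and interpretable": ["explanations available"],
    "privacy-enhanced": ["data minimization applied"],
    "fair with harmful bias managed": ["bias audit performed"],
}

def assess(satisfied):
    """Score an assessment: per characteristic, the fraction of its
    properties present in the set of satisfied properties."""
    return {
        char: sum(p in satisfied for p in props) / len(props)
        for char, props in TAXONOMY.items()
    }

report = assess({"accuracy measured", "threat model exists"})
```

A structure like this is one way voluntary frameworks get operationalized in practice: the taxonomy supplies the rows of an audit checklist, and gaps in the report show where in the AI lifecycle attention is needed.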
Organizational Approaches to AI Values
OECD AI Principles
The Organisation for Economic Co-operation and Development (OECD) has established principles for responsible AI development based on five key pillars:
Inclusive growth, sustainable development, and human well-being
Respect for rule of law, human rights, and democratic values (including fairness and privacy)
Transparency and explainability
Robustness, security, and safety
Accountability
These principles, adopted in 2019 and updated in 2024, aim to enhance human capabilities while protecting privacy and ensuring decisions remain under meaningful human oversight.
EU Ethics Guidelines for Trustworthy AI
The European Union has developed guidelines stating that trustworthy AI should be:
Lawful - respecting applicable laws and regulations
Ethical - respecting ethical principles and values
Robust - from both technical and social perspectives
These guidelines identify seven key requirements:
Human agency and oversight
Technical robustness and safety
Privacy and data governance
Transparency
Diversity, non-discrimination, and fairness
Environmental and societal well-being
Accountability
Corporate AI Value Frameworks
Major technology companies have developed their own AI value frameworks. Google's AI principles, for example, are organized around three main themes:
AI that assists, empowers, and benefits humanity
Responsible AI development throughout the lifecycle
Tools that empower others to harness AI for individual and collective benefit
Microsoft similarly emphasizes six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
Implementing AI Value Taxonomies
Creating Effective AI Guiding Principles
When organizations develop AI guiding principles, experts recommend they should be:
Positive and actionable - stating what should be done rather than what shouldn't
Distinct - clearly differentiated from each other
Memorable - easy to recall and apply
Contextual - relevant to the organization's specific needs and circumstances
These principles serve as a compass for navigating the ethical terrain of AI use, helping organizations avoid data privacy and security risks while developing AI responsibly.
Value Alignment Process
The process of aligning AI systems with human values involves multiple stages:
Identifying core human values across cultural contexts
Embedding these values at every development stage
Translating abstract principles into technical guidelines
Ensuring systems remain auditable and transparent
Continuously monitoring and updating to adapt to evolving societal norms
This process acknowledges that human values vary across cultures and contexts, requiring AI systems to be tailored to specific cultural, legal, and societal frameworks.
Challenges and Future Directions
Cultural and Contextual Variations
A significant challenge in developing comprehensive AI value taxonomies is accounting for cultural variations in how values are interpreted and prioritized. For example, the concept of privacy may be understood differently across regions, with some cultures emphasizing individual privacy while others prioritize collective security.
Operationalizing Abstract Values
Translating abstract ethical principles into concrete technical implementations remains challenging. Researchers continue to explore methodologies for incorporating normative ethical principles into the reasoning capacities of responsible AI systems.
Empirical Validation
While theoretical taxonomies provide useful frameworks, there's growing recognition of the need for empirical approaches that observe AI values "in the wild" through real-world interactions. This helps verify whether trained values actually manifest in practice.
Evolving Regulatory Landscapes
As regulatory frameworks like the EU AI Act develop, taxonomies of AI values will need to evolve to maintain alignment with legal requirements while providing practical guidance for implementation.
Conclusion
AI value taxonomies provide essential frameworks for ensuring artificial intelligence systems operate in alignment with human values, ethical principles, and societal needs. From broad ethical frameworks to detailed technical requirements, these taxonomies offer structured approaches to the complex challenge of developing responsible AI.
As the field continues to evolve, taxonomies that combine theoretical principles with empirical observation will be particularly valuable.
The most effective frameworks will likely be those that acknowledge cultural variations while identifying universal principles, and that provide practical guidance for implementation while remaining adaptable to technological and societal changes.
By developing and refining these taxonomies, we can work toward AI systems that truly serve humanity's best interests and reflect our shared values.
Thanks for reading.
Samet Ozkale
Citations:
https://theconversation.com/ai-datasets-have-human-values-blind-spots-new-research-246479
https://aiethicslab.rutgers.edu/glossary/oecd-ai-principles/
https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
https://wawiwa-tech.com/blog/ai-taxonomy-making-sense-of-artificial-intelligence
https://cltc.berkeley.edu/wp-content/uploads/2023/01/Taxonomy_of_AI_Trustworthiness.pdf
https://theodi.org/news-and-events/blog/a-data-for-ai-taxonomy/
https://www.boozallen.com/insights/ai-research/ai-guiding-principles.html
https://genai.byu.edu/00000191-484f-db4d-a995-e9cf5a1d0001/guiding-principles-of-ai
https://www.linkedin.com/pulse/value-taxonomies-training-ai-taxonomies
https://www.ibm.com/think/topics/classification-machine-learning
https://ojs.aaai.org/index.php/AIES/article/download/31703/33870/35767
https://cltc.berkeley.edu/publication/a-taxonomy-of-trustworthiness-for-artificial-intelligence/
https://gta.georgia.gov/policies-and-programs/artificial-intelligence/guiding-principles-ai
https://www3.weforum.org/docs/WEF_AI_Value_Alignment_2024.pdf
https://montrealethics.ai/mapping-the-ethics-of-generative-ai-a-comprehensive-scoping-review/
https://research-information.bris.ac.uk/files/360385530/2208.12616v1.pdf
https://assets.anthropic.com/m/18d20cca3cde3503/original/Values-in-the-Wild-Paper.pdf
https://huggingface.co/datasets/Anthropic/values-in-the-wild
https://www.oecd.org/en/topics/sub-issues/ai-principles.html
https://standards.ieee.org/initiatives/autonomous-intelligence-systems/
https://ansi.org/standards-news/all-news/2024/05/5-9-24-oecd-updates-ai-principles
https://ai4people.org/PDF/AI4People_Ethical_Framework_For_A_Good_AI_Society.pdf
https://librarylearningspace.com/ieee-introduces-free-access-to-ai-ethics-and-governance-standards/
https://www.raspberrypi.org/blog/experience-ai-unesco-ai-competency-framework/
https://cacm.acm.org/research/the-eu-ai-act-and-the-wager-on-trustworthy-ai/
https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_brochure_v2.pdf
https://unsceb.org/principles-ethical-use-artificial-intelligence-united-nations-system
https://www.ethics.org/wp-content/uploads/Ethically-Aligned-Design-May-2019.pdf
https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
https://paulwagle.com/ethical-ai-ieees-ethically-aligned-design-principles/
https://www.linkedin.com/pulse/unlocking-business-value-different-categories-ai-leon-paaijmans-llm
https://standards.ieee.org/products-programs/icap/ieee-certifaied/
https://standards.ieee.org/industry-connections/activities/ieee-global-initiative/
https://onlinelibrary.wiley.com/doi/10.1002/9781119815075.ch45
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
https://www.nist.gov/document/eu-us-terminology-and-taxonomy-artificial-intelligence-second-edition
https://www.paloaltonetworks.com/cyberpedia/ieee-ethically-aligned-design