6 February 2025

Landmark AI framework sets new standard for tackling algorithmic bias

IEEE 7003-2024, “Standard for Algorithmic Bias Considerations”

The Institute of Electrical and Electronics Engineers (IEEE) recently released IEEE 7003-2024, “Standard for Algorithmic Bias Considerations,” a landmark framework designed to assist organizations in addressing bias in artificial intelligence (AI) and autonomous intelligent systems (AIS).

Published on January 24, 2025, this standard establishes processes to help define, measure, and mitigate algorithmic bias while promoting transparency and accountability throughout the AI system lifecycle.

The role of algorithmic bias in modern AI systems

AI systems are increasingly influencing critical decisions in the healthcare, employment, insurance, and financial services sectors, among others. While these technologies offer immense benefits, they also carry risks – including the risk of unintended bias (ie, bias that is not deliberately designed into the model) against individuals based on protected characteristics, such as race or gender. Unintended bias in AI can stem from unrepresentative training datasets, poorly mapped decision criteria, insufficient monitoring during deployment, or model drift. Left unchecked, these biases can lead to systemic discrimination, reputational harm, and legal liability.

The IEEE 7003-2024 standard aims to mitigate these risks by providing a comprehensive framework for identifying and addressing algorithmic bias. It encourages organizations to adopt an iterative, lifecycle-based approach that considers bias from the system’s initial design to decommissioning.

In doing so, organizations may also be able to leverage elements of this framework to support compliance with a growing number of US state and international legislative AI mandates, such as the requirement to identify and mitigate unintended bias in AI systems before significant harm can occur (see, eg, the EU AI Act and the Colorado Consumer Protections for Artificial Intelligence Act).

Key insights from IEEE 7003-2024

Organizations seeking to comply with the new standard may consider the following steps:

1. Establishing a bias profile: The standard emphasizes the creation of a "bias profile" to document all considerations regarding bias throughout the system’s lifecycle. This information repository tracks decisions related to bias identification, risk assessments, and mitigation strategies.

2. Identifying stakeholders and assessing risks: Companies are encouraged to identify stakeholders – both those who influence the system and those impacted by it – early in the development process. Comprehensive risk assessments should account for the potential adverse impacts of bias on different groups of stakeholders and should be updated as the system evolves.

3. Ensuring data representation: Poor data quality is a leading cause of algorithmic bias. The standard calls for evaluating datasets to confirm they sufficiently represent all stakeholders, particularly marginalized groups. Organizations are encouraged to document decisions related to data inclusion, exclusion, and governance (an illustrative representation check is sketched after this list).

4. Monitoring for drift: Algorithmic systems are susceptible to "data drift" (ie, changes in the data environment) and "concept drift" (ie, shifts in the relationship between inputs and outputs). Continuous monitoring and retraining are important to ensure fairness over time (a minimal drift-detection sketch also follows this list).

5. Promoting accountability and transparency: To foster trust, organizations are encouraged to communicate the intended purpose, limitations, and acceptable use of their AI systems using documentation that is clear, accessible, and tailored to stakeholders – including end users and regulators.
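
By way of illustration, the short Python sketch below shows one way a data science team might run the kind of representation check described in point 3. It is a minimal example under stated assumptions – a pandas DataFrame, a hypothetical "gender" column, and illustrative benchmark shares and tolerance – and is not a method prescribed by IEEE 7003-2024.

  import pandas as pd

  # Illustrative benchmark: assumed share of each group in the population
  # the system will serve (hypothetical figures, not from the standard).
  BENCHMARK = {"female": 0.51, "male": 0.49}
  TOLERANCE = 0.05  # flag groups under-represented by more than 5 points

  def representation_report(df: pd.DataFrame, column: str) -> pd.DataFrame:
      """Compare observed group shares in a dataset against a benchmark."""
      observed = df[column].value_counts(normalize=True)
      rows = []
      for group, expected in BENCHMARK.items():
          share = float(observed.get(group, 0.0))
          rows.append({
              "group": group,
              "observed_share": round(share, 3),
              "expected_share": expected,
              "under_represented": share < expected - TOLERANCE,
          })
      return pd.DataFrame(rows)

  # Example usage with a toy dataset: 70% male, 30% female records.
  data = pd.DataFrame({"gender": ["male"] * 70 + ["female"] * 30})
  print(representation_report(data, "gender"))

A report like this, retained alongside data inclusion and exclusion decisions, is the sort of record a bias profile is designed to capture.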

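Similarly, the drift monitoring described in point 4 can be operationalized with standard statistical tooling. The sketch below uses a two-sample Kolmogorov-Smirnov test (via scipy) to flag data drift in a single numeric feature; this is one common technique among many, assumed here for illustration, and the significance threshold is arbitrary.

  import numpy as np
  from scipy import stats

  DRIFT_P_VALUE = 0.01  # assumed significance threshold for illustration

  def data_drift_detected(reference: np.ndarray, live: np.ndarray) -> bool:
      """Two-sample KS test: flags drift when the live feature distribution
      differs significantly from the training-time reference sample."""
      _statistic, p_value = stats.ks_2samp(reference, live)
      return bool(p_value < DRIFT_P_VALUE)

  # Example: compare training-time feature values with a shifted live sample.
  rng = np.random.default_rng(0)
  reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
  live = rng.normal(loc=0.4, scale=1.0, size=5_000)  # simulated drift
  print("data drift detected:", data_drift_detected(reference, live))

A drift alert of this kind would typically trigger the review and retraining processes that the standard encourages organizations to maintain throughout deployment.
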
Conclusion

IEEE 7003-2024 represents a significant step forward in the development of ethical AI practices. By following its guidelines, organizations can create AI systems that are not only innovative, but also fair, transparent, and aligned with societal values.

Proactive adoption of this standard may help businesses mitigate risks, foster accountability, and unlock the full potential of AI technologies in a way that can benefit all stakeholders.

How DLA Piper can help

DLA Piper is well positioned to assist organizations in navigating the complexities of detecting and mitigating unwanted bias in AIS, including the practices recommended by IEEE 7003-2024. With a multidisciplinary team of AI attorneys and data scientists, we can provide tailored guidance to help clients meet these new requirements.

Our services include:

  • Privileged bias testing and risk assessments: Conducting comprehensive evaluations of your AI systems to identify risks and areas of improvement

  • Stakeholder engagement and documentation: Assisting in identifying stakeholders, documenting bias considerations, and creating a robust "bias profile"

  • Data governance and testing: Developing frameworks to evaluate and improve data quality, representation, and fairness in AI systems

  • Ongoing monitoring and oversight: Establishing processes for continuous monitoring and retraining to ensure long-term compliance with ethical and regulatory standards

Whether you are building a new AI system or seeking to evaluate an existing one, our team is ready to guide you every step of the way, helping ensure your technology aligns with both industry best practices and societal values.