As a computer scientist who has been immersed in AI ethics for about a decade, I’ve witnessed firsthand how the field has evolved. Today, a growing number of engineers find themselves developing AI solutions while navigating complex ethical considerations. Beyond technical expertise, responsible AI deployment requires a nuanced understanding of ethical implications.
In my role as IBM’s AI ethics global leader, I’ve observed a significant shift in how AI engineers must operate. They are no longer just talking to other AI engineers about how to build the technology. Now they need to engage with those who understand how their creations will affect the communities using these services. Several years ago at IBM, we recognized that AI engineers needed to incorporate additional steps into their development process, both technical and administrative. We created a playbook providing the right tools for testing issues like bias and privacy. But understanding how to use these tools properly is crucial. For instance, there are many different definitions of fairness in AI. Determining which definition applies requires consultation with the affected community, clients, and end users.
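To see why the choice of definition matters, here is a minimal sketch of two common fairness definitions, demographic parity and equal opportunity, applied to the same toy predictions. The data, function names, and the "fair/unfair" readings are invented purely for illustration; they are not drawn from any IBM tool.

```python
# Minimal sketch: two fairness definitions can disagree on the same predictions.
# All data below is made up for illustration only.

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    rate = lambda g: sum(p for p, gr in zip(y_pred, group) if gr == g) / group.count(g)
    return rate(1) - rate(0)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between group 1 and group 0."""
    def tpr(g):
        preds = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
        return sum(preds) / len(preds)
    return tpr(1) - tpr(0)

# Toy example: each group gets the same approval rate, yet more qualified
# people in group 0 are missed.
y_true = [1, 1, 0, 0, 1, 1, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))        # 0.0  -> "fair" by one definition
print(equal_opportunity_gap(y_true, y_pred, group)) # ~0.17 -> "unfair" by another
```

A system can satisfy one definition while violating the other, which is exactly why the affected community, clients, and end users need a voice in choosing the metric.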
In her role at IBM, Francesca Rossi cochairs the company’s AI ethics board to help determine its core principles and internal processes.
Education plays a vital role in this process. When we piloted our AI ethics playbook with AI engineering teams, one team believed its project was free of bias concerns because it didn’t use protected variables like race or gender. They didn’t realize that other features, such as zip code, can serve as proxies correlated with protected variables. Engineers sometimes believe that technological problems can be solved with technological solutions alone. Software tools are useful, but they’re just the beginning. The greater challenge lies in learning to communicate and collaborate effectively with diverse stakeholders.
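Returning to the zip-code example, here is a small illustrative sketch of the proxy problem. The records and zip codes are invented, and the "group" column stands in for a protected attribute that the model itself never sees.

```python
# Minimal sketch: even with the protected attribute removed from the training
# data, an innocuous-looking feature such as zip code can carry almost the
# same information. All records below are hypothetical.
from collections import Counter

records = [  # (zip_code, group); the group column is never given to the model
    ("10001", 1), ("10001", 1), ("10001", 1), ("10001", 0),
    ("60629", 0), ("60629", 0), ("60629", 0), ("60629", 1),
]

def group_rate_by_zip(rows):
    """Fraction of group-1 individuals within each zip code."""
    totals, ones = Counter(), Counter()
    for zip_code, group in rows:
        totals[zip_code] += 1
        ones[zip_code] += group
    return {z: ones[z] / totals[z] for z in totals}

# Sharply different rates mean zip code is acting as a proxy: a model can
# largely reconstruct the protected attribute it was never shown.
print(group_rate_by_zip(records))  # {'10001': 0.75, '60629': 0.25}
```

This is why simply dropping the protected column is not enough; the bias signal can re-enter through correlated features.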
The pressure to rapidly release new AI products and tools can create tension with thorough ethical evaluation. This is why we established centralized AI ethics governance through an AI ethics board at IBM. Individual project teams often face deadlines and quarterly-results pressure, which makes it difficult for them to fully consider broader impacts on reputation or client trust. Principles and internal processes should therefore be centralized. Our clients, themselves companies, increasingly demand solutions that respect certain values. Regulations in some regions now mandate ethical considerations as well. Even major AI conferences require papers to discuss the ethical implications of the research, pushing AI researchers to consider the impact of their work.
At IBM, we began by developing tools focused on key issues like privacy, explainability, fairness, and transparency. For each concern, we created an open-source tool kit with code guidelines and tutorials to help engineers implement them effectively. But as technology evolves, so do the ethical challenges. With generative AI, for example, we face new concerns about potentially offensive or violent content creation, as well as hallucinations. As part of IBM’s family of Granite models, we’ve developed safeguarding models that evaluate both input prompts and outputs for issues like factuality and harmful content. These model capabilities serve both our internal needs and those of our clients.
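The general pattern those safeguarding models implement, screening both the input prompt and the generated output before anything is returned, can be sketched generically. Everything below (harm_score, generate, and the 0.5 threshold) is a hypothetical stand-in for illustration, not the actual Granite model API.

```python
# Generic sketch of the input/output safeguarding pattern: screen the prompt,
# generate, then screen the response. Function names and threshold are
# hypothetical placeholders, not IBM's APIs.

def harm_score(text: str) -> float:
    """Placeholder safety classifier returning a risk score in [0, 1]."""
    flagged_terms = {"attack", "exploit"}            # toy keyword check only
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)

def generate(prompt: str) -> str:
    """Placeholder for a call to the underlying language model."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str, threshold: float = 0.5) -> str:
    if harm_score(prompt) >= threshold:              # screen the input prompt
        return "Request declined by input safeguard."
    response = generate(prompt)
    if harm_score(response) >= threshold:            # screen the output
        return "Response withheld by output safeguard."
    return response

print(guarded_generate("Summarize our AI ethics playbook."))
```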
Company governance structures must remain agile enough to adapt to technological evolution. We continually assess how new developments like generative AI and agentic AI might amplify or reduce certain risks. When releasing models as open source, we evaluate whether this introduces new risks and what safeguards are needed.
For AI solutions raising ethical red flags, we have an internal review process that may lead to modifications. Our assessment extends beyond the technology’s properties (fairness, explainability, privacy) to how it’s deployed. Deployment can either respect human dignity and agency or undermine it. We conduct risk assessments for each technology use case, recognizing that understanding risk requires knowledge of the context in which the technology will operate. This approach aligns with the European AI Act’s framework—it’s not that generative AI or machine learning is inherently risky, but certain scenarios may be high or low risk. High-risk use cases demand additional scrutiny.
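Here is a rough, hypothetical illustration of use-case-level risk tiering; the categories and rules are simplified assumptions for illustration, not IBM’s internal checklist or the AI Act’s legal text. The point is that the same model can land in different tiers depending on how and where it is deployed.

```python
# Hypothetical illustration of use-case-level risk tiering. The attributes and
# rules below are simplified assumptions, not IBM's process or the AI Act.
from dataclasses import dataclass

@dataclass
class UseCase:
    description: str
    affects_legal_or_safety_outcomes: bool   # e.g., hiring, credit, medical triage
    fully_automated_decision: bool           # no human review of individual decisions

def risk_tier(uc: UseCase) -> str:
    if uc.affects_legal_or_safety_outcomes and uc.fully_automated_decision:
        return "high"      # demands the additional scrutiny described above
    if uc.affects_legal_or_safety_outcomes:
        return "limited"   # human-in-the-loop lowers, but does not remove, risk
    return "minimal"

print(risk_tier(UseCase("Resume screening with auto-reject", True, True)))  # high
print(risk_tier(UseCase("Drafting marketing copy", False, False)))          # minimal
```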
In this rapidly evolving landscape, responsible AI engineering requires ongoing vigilance, adaptability, and a commitment to ethical principles that place human well-being at the center of technological innovation.