Artificial intelligence is a technology issue—but it’s also a people issue, argues Sharon Doherty, chief people and places officer at Finastra, a London-based software provider with North American headquarters in Lake Mary, Florida.
Doherty spoke with StrategicCIO360 about why AI bias is a growing concern and how company leaders can address it.
Why do you think it’s important to be actively engaged in addressing the impacts of bias in artificial intelligence?
As affirmed by a KPMG report commissioned by Finastra earlier this year, artificial intelligence draws on historical data and patterns, which means that many algorithms suffer from racial and gender biases. As a result, when applied to areas like financial services, this technology exacerbates inequalities already present in our financial system. This is of course a technology problem that the industry is working hard to address.
However, it is also a human and social issue, because it impacts real people. As leaders who are responsible for ensuring an equitable work environment, it’s important that CHROs, CPOs and other leaders discuss and acknowledge the biases and gaps that exist within the technology and data that play an increasingly large role in all of our lives. And beyond that, as change-makers at our organizations, we need to talk about which business practices need to change in order to mitigate these biases and gaps and prevent them going forward.
What are some key takeaways for HR leaders as they begin to think about this as an issue that falls into their domain?
As HR leaders begin to think about how they can tackle the issue of AI bias, it’s important for them to keep in mind that there is synergy between this project and other critical human resource priorities. For example, hiring a diverse workforce also ensures a diverse approach to algorithmic training, which is important in the effort to minimize biases in AI.
Algorithms are taught by humans, and the best way to eliminate bias in our technology starts with the people who build it and the companies that hire them. CHROs and CPOs already play a significant role in bridging gender and racial gaps through hiring, so as they think about ways to tackle the impact of bias in AI, they can identify areas like hiring where new practices can have dual impacts.
What initiatives have you found to be successful in combating this?
Broadly, I’ve found that CHROs and CPOs are most impactful when they’re able to help their companies build and define a platform that provides a launching point for different initiatives that work to close different gaps in their industry. As an example, hackathons (company-sponsored events or competitions that focus on female talent in financial services) create an environment where companies can not only find, but also celebrate, diverse candidates pushing the boundaries of innovation.
Speaking from personal experience, I have found hackathons to be a great method for enacting concrete change. For example, at Finastra’s 2020 virtual hackathon, the mission of our “hacking for good” event was to award projects that helped build an unbiased fintech future. We saw record engagement and registrations that ranged from startups to schools and universities. This underscores the importance of these kinds of events, because reaching young people at the start of, or even before, their careers is huge in creating a more diverse pipeline—which directly reduces the level of bias in AI.
What advice would you give to HR leaders who are thinking about rolling out their own initiatives?
I would urge HR leaders to remember that they are not in this fight alone. Eliminating bias in AI and technology will require a collective and collaborative effort from the technology and finance industries, as well as from the government. This is a society-wide shortcoming that we all have to address. CPOs and CHROs should work with other C-Suite executives to facilitate collaboration with regulators in multiple markets, and should call upon their broader industries to take action against the risks to society associated with AI bias.