
Breaking the bias: Why gender matters in standards

By Anjana Susarla, Omura-Saxena Professor of Responsible AI, Michigan State University

Readers of this blog may be familiar with the term digital divide: the gap between those who can access computers and the Internet, and those who cannot. Now, with algorithms influencing almost every part of our lives, we must turn our attention to the new algorithmic divide.

Artificial intelligence (AI), built on the back of increasingly sophisticated algorithms, has become a principal component of the world’s accelerating digital transformation. AI can tackle problems in new ways. It enables a growing range of other new and emerging digital technologies.

At the same time, unfortunately, the in-built biases of today’s AI are shaping the future world of work in a way that could exacerbate long-standing gender inequalities. The assumptions and biases underlying any predictive algorithm derive entirely from its initial data inputs.

Men and women may use technologies in different ways. However, we lack gender-disaggregated data on real-life tech use. The resulting “data desert” effectively excludes women’s statistics, information and perspectives from the data sets that underpin algorithmic decision-making.

Amid all the hype about the metaverse, Web 3.0, blockchains, digital currencies, and smart cities, these technologies remain informed by data deserts that fail to take women into account, at the risk of ignoring the context of women’s participation in a wide range of economic activities.

Moreover, with AI increasingly automating the customer service roles typically filled by women, we could also see gender wage gaps deepen in the age of AI.

Fostering equality

To foster equality in the world of work, decision-makers need to focus on key areas of science, technology, engineering, and mathematics (STEM) where men are over-represented and start consciously promoting women to roles that are not easily automated.

When it comes to ensuring fairness in predictive algorithms, the industry will need updated, gender-sensitive international technical standards to help foster inclusion.

According to a report on AI from the United Nations Educational, Scientific and Cultural Organization (UNESCO), fostering diverse teams is one way to build AI in a trustworthy manner. But standards-setting bodies also have work to do in understanding “algorithmic harms.” Only by knowing how AI reinforces biases can we mitigate individual and societal harms.

Mortgage lending algorithms, for example, have been found to award less than half of available credit limits to female applicants, compared to male applicants with equivalent incomes and residential addresses. Similarly, insurance or housing discrimination has negatively affected women’s access to credit and borrowing opportunities, largely because AI training data did not correct for gender bias.

While firms may find AI efficient for mortgage processing, we need to ensure the algorithms they use don’t place undue burdens on distinct groups of individuals, including women.

Inclusive AI

Builders of “gender-smart AI” also need to understand potential algorithmic harms and correct for biases or discriminatory outcomes. This includes practices for developing gender-inclusive AI and for auditing algorithms with a gender lens, as illustrated in the sketch below. Standards-making bodies must create standards for inclusion, which governments and companies should then apply to ensure sustainable economic development.
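
To make “auditing with a gender lens” a little more concrete, here is a minimal sketch in Python. It assumes a hypothetical table of lending decisions (the column names gender and approved are illustrative, not from any real system) and computes two widely used first-pass fairness measures: the demographic parity gap and the disparate impact ratio between gender groups. It is a sketch of one basic audit step, not a complete fairness methodology.

```python
import pandas as pd

# Hypothetical audit sample: one row per applicant, with the model's
# decision (1 = approved) and the applicant's self-reported gender.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,    0,   0,   1,   1,   1,   0,   1],
})

# Approval rate for each gender group.
rates = decisions.groupby("gender")["approved"].mean()

# Demographic parity gap: difference between the highest and lowest
# group approval rates (0 means parity).
parity_gap = rates.max() - rates.min()

# Disparate impact ratio: lowest group rate divided by highest.
# A common rule of thumb flags ratios below 0.8 for closer review.
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
print(f"Disparate impact ratio: {impact_ratio:.2f}")
```

Even this simple check depends on having the gender-disaggregated data discussed above; without it, an audit of this kind is impossible.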

Looking at the big picture, gender equality would reinforce the full spectrum of UN Sustainable Development Goals (SDGs). If we do not meet SDG 5, which relates to women’s equality, we certainly cannot achieve the others.

To break the bias hindering women’s equality in standardization and more broadly in the age of AI, women and men must work together to build responsible AI. That means making sure AI is fundamentally human-centred, inclusive, and based on standards that address bias.

This is the message I brought to the Women in Standardization Expert Group (WISE) event on 8 March, International Women’s Day, during the latest World Telecommunication Standardization Assembly organized by the International Telecommunication Union (ITU).

I was delighted to join International Gender Champion and ITU Secretary-General Houlin Zhao, as well as representatives from Australia, Cameroon, Tunisia, and the United States, to discuss the development of global technical standards, and AI more broadly, through a gender lens.

Following those discussions, I am more confident we can build AI that uses inclusive data, complemented by human agency and oversight, together with standards for gender inclusion to ensure those algorithms are applied equitably.

See the full webcast of the Women in Standardization Expert Group event.
