AI will affect everyone — it can’t be created by a select few

Philip Ellis
Creation: Being Human
5 min read · Sep 4, 2018


The third of our contributions by writer and journalist Philip Ellis

AI is going to irrevocably change how we live and work. But these world-changing systems are currently being developed by a narrow set of people, each with their own specialist skills, their own perspectives and, unavoidably, their own unconscious biases. How do we ensure that AI serves everyone?

We’re already beginning to see AI systems exhibit their own version of bias, from Siri struggling to understand accents to the especially egregious example of Google Photos misidentifying the faces of black individuals as gorillas. One obvious problem here is that while machines are being fed vast amounts of data, that data often represents only a certain portion of the population (not to mention that the engineers and data scientists working on these programmes are largely white and male).
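To make the data problem concrete, here is a minimal, purely illustrative sketch (not from the article) of how a model trained on data dominated by one group can perform far worse on an under-represented group. The group labels, sample sizes and scikit-learn setup are all assumptions chosen for demonstration, not a description of any real system.

```python
# Illustrative sketch: a classifier trained on skewed data performs
# well on the majority group and poorly on the minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each synthetic "group" has its own feature distribution and
    # its own true decision boundary (both controlled by `shift`).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=3.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group separately.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=3.0)
print("accuracy on majority group A:", model.score(Xa_test, ya_test))
print("accuracy on minority group B:", model.score(Xb_test, yb_test))
```

Because the majority group dominates training, the learned decision boundary tracks that group’s distribution, and accuracy on the minority group collapses to little better than chance; auditing performance per group, and rebalancing or broadening the data, is the usual first remedy.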

Even the way AI is presented could be said to have roots in unconscious bias: why does the default mode for so many digital assistants appear to be female? Noted futurist Tracey Follows speculates that this stems from assumptions we make as a society about clerical work, namely that it is gendered labour. “In their development, some companies would carry out research with human PAs — who were women — so it’s no surprise that we mostly refer to digital assistants as ‘she’, that they have female voices, and are gendered,” she says.

The gendering of virtual assistants comes with all kinds of worrying ramifications about how we, as humans, treat machines which we consider female. If a child does not differentiate between a machine voice and a human voice, and feels no compunction in barking orders at Alexa, how will that affect the way they interact with women as they grow up? Will they internalise the 1950s-esque idea that to be female is to be subservient, attentive, and domestically occupied? It was only this year that pushback from concerned consumers prompted developers to make Alexa more assertive and less responsive to sexism from users.

These are all pertinent concerns, considering the thousands of chatbots that look set to enter the market with the mass automation of customer service. “If we don’t change course, this next generation of conversational AIs will be created by the same people who built the current sexist algorithms and scripts — but on an exponentially bigger scale,” warns LivePerson CEO Robert LoCascio. “The engineers whose AI systems categorised women into kitchen and secretarial roles while offering men jobs with executive titles will have their biases massively amplified, as conversational AI goes global.”

At present, the conversation about diversity in AI revolves around race and gender, but the issue goes beyond that, into diversity of thinking. Digital sociologist Lisa Talia Moretti posits that it’s not more data scientists that we need, but social scientists. “Data scientists don’t look at data through a cultural lens,” she says. “On the other side are scientists trained in psychology, sociology, ethnography; these people don’t come with any data science background. We need to find a common language.”

We have evolved as a species and as a society over millennia, and the myriad complex ways in which we interact and express ourselves transcend simple numbers, queries and searches; the humanities and social sciences will be integral in enabling machines to anticipate the near-infinite quirks we demonstrate on a daily basis. One of the reasons AIs exhibit bias, says Moretti, is their inability to identify the context surrounding imagery and language: the people training the machines feed in vast volumes of data scraped from free online sources with little to no consideration of the actual content. She believes it is important to bring in different skill-sets and perspectives in order to bridge knowledge gaps, “so that people who aren’t familiar with AI jargon can still contribute in a meaningful way,” and help pre-empt issues that may arise from a lack of cultural or contextual understanding.

Ashok Srivastava, SVP and Chief Data Officer at financial software company Intuit, similarly advocates for hiring people with a breadth of expertise and training. “We have emphasised that we need a pipeline from liberal arts to technology,” he recently told ZDNet. “We have hired people in the past with degrees in political science, art history, and English. These people have good perspective and bring diverse thought to the table… You can’t have conversational technology and approach comprehension without general knowledge.”

One huge barrier to this necessary diversity of thought in AI is cost. Data and processing power are hugely expensive, which unfortunately means gatekeeping of one form or another is an inevitability. A lot of the experimentation and research around AI is being taken out of governments and universities, and now resides in the corporate world, complete with its intellectual property laws. “You can’t penetrate this because it’s behind a profit wall; this is their competitive advantage,” says Moretti. “Right now there’s money in AI, and companies are making money by not working with social scientists.”

She is optimistic, though, that there will be companies which want to be responsible for the impact their technology has on customers’ lives. “A lot of people think impact is more important than intent, but the two need to be in balance with each other,” she says. “Intent is a good place to start when you’re building a product and don’t know what the impact is going to be yet. In the absence of rules and guidelines, intent manifests as a kind of de facto ‘this is what we were hoping to build’, and that can act as a kind of insurance for an organisation.”

This absence of rules is another potential cause for concern, in that it allows a wild-west innovation culture in which companies are less beholden to questions of accountability. Instituting ethical guidelines at a government level, beyond basic blanket mandates such as “data should not discriminate”, will be very tricky indeed, says Moretti: “Ethics, like data, is personal, subjective and contextual.”

Does it fall to people within the industry, then, to develop their own best practices, in the hopes that AI development will become self-policing? February 2018 saw the inaugural Conference on Fairness, Accountability and Transparency (FAT), a multidisciplinary summit which brings together researchers and practitioners to explore the ethical and legal challenges facing socio-technical systems. “This is really the first conference that is addressing the issues of fairness, accountability, ethics, and transparency in AI,” steering committee member Timnit Gebru told MIT Technology Review. “Machine learning people on their own cannot solve this problem. There are issues of transparency; there are issues of how the laws should be updated. If you’re going to talk about bias in healthcare, you want to talk to healthcare professionals about where the potential biases could be, and then you can think about how to have a machine learning based solution.”

Fairness and transparency, it seems, will be the criteria by which AI companies are judged, and ultimately, the innovators who succeed in this space in the long-term will be those who recognise that the new currency of the realm is trust.

“As more of this technology is rolled out, people are only going to get more switched on and savvy,” says Moretti. “Millennials and Gen Z are powerful consumers; they have buying power and they hold influence. We’re not dealing with a generation of people that you can lie to or try to hoodwink anymore, and so I think trust is going to become increasingly important.”
