Three frontier developments stand out in terms of both their promised rewards and their potential risks to equality. These are human augmentation, sensory AI, and geographic AI.
Variously described as biohacking or Human 2.0, human augmentation technologies have the potential to enhance human performance for good or ill.
Some of the most promising developments aim to improve the lives of people with disabilities. AI-powered exoskeletons can enable disabled individuals or older workers to accomplish physical tasks that were previously impossible. Chinese startup CloudMinds has developed a smart helmet called Meta, which uses a combination of smart sensors, visual recognition, and AI to help visually-impaired people safely navigate urban environments. Using technology similar to autonomous driving, sensors beam data on location and obstacles to a central cloud system, which analyzes the data and relays vocal directions and other information back to the user. The system could be used to read road signs and notices, or potentially even translate Braille notices printed in foreign languages.
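To make the last step of that pipeline concrete, here is a minimal sketch of how cloud-analyzed obstacle data might be turned into a vocal direction. The data format, field names, and phrasing are illustrative assumptions, not CloudMinds' actual protocol.

```python
# Hypothetical final step: convert an obstacle record (as it might arrive
# from the cloud analysis) into a phrase for text-to-speech output.
def direction_phrase(obstacle):
    """obstacle: dict with a kind, a bearing in degrees
    (0 = straight ahead, negative = left), and a distance in meters."""
    if abs(obstacle["bearing"]) < 15:
        side = "ahead"
    elif obstacle["bearing"] < 0:
        side = "to your left"
    else:
        side = "to your right"
    return f"{obstacle['kind']} {side}, {obstacle['distance']} meters"

print(direction_phrase({"kind": "curb", "bearing": -40, "distance": 2}))
# -> "curb to your left, 2 meters"
```

In a real system this string would be fed to a speech synthesizer; the point here is only the shape of the sensor-to-voice loop.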
For sign-language users, a major challenge is how to communicate with the majority of people who do not know sign language. A promising development here is the sign-language glove developed by researchers at Cornell University. Users wear a right-hand glove stitched with sensors that measure the orientation of the hand and flex of the fingers during signing. These electrical signals are then encoded as data and analyzed by an algorithm that learns to read the user’s signing patterns and convert these to spoken words. In trials, the system achieved 98% accuracy in translation.
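The core idea — mapping a vector of sensor readings to a vocabulary word — can be sketched in a few lines. The sensor layout, sample values, and three-word vocabulary below are illustrative assumptions, not the Cornell team's actual design, and a nearest-template classifier stands in for their learned model.

```python
import math

# Hypothetical training templates: each sign is a vector of five
# finger-flex readings (0.0 = straight, 1.0 = fully bent) averaged
# over the course of a gesture.
TEMPLATES = {
    "hello":     [0.1, 0.1, 0.1, 0.1, 0.1],
    "thank you": [0.2, 0.8, 0.8, 0.8, 0.8],
    "yes":       [0.9, 0.9, 0.9, 0.9, 0.9],
}

def classify(reading):
    """Return the word whose template is nearest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda word: dist(TEMPLATES[word], reading))

print(classify([0.15, 0.75, 0.85, 0.80, 0.70]))  # -> "thank you"
```

A production system learns these templates per user from examples rather than hard-coding them, which is how the glove adapts to individual signing patterns.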
Scientists have already shown how brain implants can help paralyzed individuals operate robotic arms and exoskeleton suits. Elon Musk’s Neuralink aims to go one step further, implanting flexible hair-thin threads to connect the human brain to AI systems that can operate phones and computers. The MIT Media Lab is pioneering a voiceless communications technology — dubbed AlterEgo — that allows users to communicate with computers and AI systems without opening their mouths, offering hope to millions of people afflicted by speech disorders. Transcranial stimulation — an experimental technology still in its infancy — is being used by sports teams and students to build muscle memory and improve concentration.
Despite these tremendous breakthroughs, the potential for new biases and inequalities remains. Apart from the obvious concerns about privacy associated with invasive technologies, cognitive or physical data could be misused — for example in recruiting or promotion decisions, in the administration of justice, or in granting (or denying) access to public services. Moreover, access to basic digital technology remains a significant barrier, with almost half the world’s population still excluded from the internet.
The sociologist Christoph Lutz observes that traditionally disadvantaged citizens are similarly disadvantaged on the internet, for example through limited access to technology, restricted opportunities for use, and a lack of important digital skills. In fact, many fear that the affluent will be better able to afford costly performance-enhancing technology, perpetuating existing disparities in education and the job market. Educational performance may come to depend less and less on how hard you study in college, and more and more on what kind of technology you can afford. Yuval Noah Harari, the author of Homo Deus, has argued that AI technologies could eventually splinter humanity into two classes he labels “the Gods and the Useless” — those who can avail themselves of performance-augmenting AI and those who cannot.
Sensory Imbalance
The human senses — sight, hearing, smell, touch, and taste — represent a rich territory for the next generation of AI technologies and applications.
Take our voices, for example. The pitch, tone, timbre, and vocabulary used can provide important clues to our physical and mental well-being. The journal Nature recently reported how voice analysis algorithms are being developed to spot signs of depression (where the frequency and amplitude of speech decline) and Alzheimer’s disease (where sufferers use more pronouns than nouns as they forget common terms). Advances in digital olfaction — the use of digital technologies that mimic the sense of smell — may soon be used to detect cancer and other diseases before the symptoms become apparent. Given increasing concern around access to healthcare in the U.S. and other economies, these developments offer the potential for early, low-cost detection of major chronic diseases: imagine just talking into your iPhone for a daily check-up.
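One of the features described above — the ratio of pronouns to nouns in transcribed speech — is simple enough to illustrate directly. The tiny word lists below are fabricated for the example; a real system would use a part-of-speech tagger and clinically validated thresholds.

```python
# Toy illustration of a pronoun-to-noun ratio over a speech transcript.
# Word lists are illustrative assumptions, not a clinical vocabulary.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they", "this", "that"}
NOUNS = {"keys", "doctor", "kitchen", "daughter", "store"}  # demo lexicon

def pronoun_noun_ratio(transcript):
    """Count pronouns per noun; higher ratios were reported as a
    possible marker of word-finding difficulty."""
    words = transcript.lower().split()
    pronouns = sum(1 for w in words if w in PRONOUNS)
    nouns = sum(1 for w in words if w in NOUNS)
    return pronouns / max(nouns, 1)

specific = pronoun_noun_ratio("my daughter left the keys in the kitchen")
vague = pronoun_noun_ratio("she left it in there with that")
print(specific, vague)  # the vague phrasing scores markedly higher
```

The interesting design question for real systems is not this arithmetic but validation: establishing that such a feature tracks the condition across accents, languages, and recording conditions.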
Yet, the potential for bias is also there: users’ data could be screened without their knowledge and could ultimately be used to cherry-pick lower-risk or healthier individuals for jobs, healthcare coverage, and life insurance, for example. The European Commission has warned that AI may perpetuate historical imbalances or inequality in society, particularly where there are data gaps along gender, racial, or ethnic lines. In healthcare, for example, disease symptoms often vary between males and females, creating the risk of bias or misdiagnosis in AI-based systems of disease detection and monitoring that are trained on gendered datasets. Similarly, while AI systems have been shown to be as accurate as dermatologists in detecting melanomas, the image datasets they are trained on often fail to represent the full range of skin types found in the population at large. The lack of representation of racial minorities in AI training data has been investigated by Joy Buolamwini and Timnit Gebru, who found that several major facial recognition datasets were “overwhelmingly composed of lighter-skinned subjects,” with significantly lower accuracy rates for females and darker-skinned subjects.
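The kind of audit Buolamwini and Gebru performed boils down to a simple discipline: report accuracy per demographic subgroup rather than a single aggregate number. The sketch below shows the mechanics with fabricated predictions; their actual study evaluated commercial face-analysis systems on a purpose-built benchmark.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label).
    Returns per-group accuracy, exposing disparities an overall
    average would hide."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, true in records:
        total[group] += 1
        correct[group] += (predicted == true)
    return {g: correct[g] / total[g] for g in total}

# Fabricated predictions for illustration only.
records = [
    ("lighter", "match", "match"), ("lighter", "match", "match"),
    ("lighter", "match", "match"), ("lighter", "no", "no"),
    ("darker", "match", "no"),     ("darker", "no", "no"),
]
print(accuracy_by_group(records))  # lighter-skinned group scores far higher
```

Here the overall accuracy (5 of 6) looks respectable while the darker-skinned subgroup fares much worse — precisely the gap that disaggregated reporting reveals.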
Geographic Tracking
Imagine being able to look at images of a city and identify patterns of inequality and urban deprivation.
This vision is now a step closer thanks to a team of scientists from Imperial College London, who developed an algorithm that uses Google Street View images of cities to identify patterns of inequality in incomes, quality of life, and health outcomes. I interviewed Dr. Esra Suel, an expert in transport planning who led the pilot project, who observed: “We wanted to understand how real people experience cities — their homes, neighborhoods, green spaces, environment, and access to urban services such as shops, schools, and sanitation. Yet, existing measures do not capture the complexity of their experiences in their entirety.” Dr. Suel sees three major benefits as visual AI systems evolve in the future. “First, they can complement official statistics such as the census in providing timelier measures of inequality, so that governments can direct resources to areas based on changing needs. Second, they can uncover pockets of poverty that may be concealed by high average incomes — the poor neighborhood located side-by-side with a more plush city area, for example. Third, the use of visual AI could be a game changer for developing countries, which often lack the resources to collect official data on inequality.”
The element of speed becomes even more critical in tracking and controlling infectious diseases, which are a major source of health and educational inequality in the developing world. Canadian startup BlueDot used airport flight data and population grids to model the spread of the Zika virus from its origin in Brazil. More recently, BlueDot sounded an early alarm around the spread of the coronavirus in the Chinese city of Wuhan, using a combination of news reports, animal disease tracking, and airline ticketing data.
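The intuition behind flight-based spread models can be sketched simply: seed an outbreak in one city and weight the risk to other cities by passenger volume. The city pairs, passenger counts, and prevalence figure below are invented for illustration; BlueDot's actual model draws on far richer inputs (news reports, animal disease tracking, ticketing data).

```python
# Toy sketch of flight-weighted outbreak export risk.
# Passenger volumes and prevalence are assumed numbers, not real data.
FLIGHTS = {  # (origin, destination): assumed daily passengers
    ("wuhan", "bangkok"): 1200,
    ("wuhan", "tokyo"): 900,
    ("bangkok", "london"): 400,
}

def expected_exported_cases(origin, prevalence):
    """Expected number of infected travellers per day leaving `origin`,
    given an assumed infection prevalence among travellers."""
    exports = {}
    for (src, dst), passengers in FLIGHTS.items():
        if src == origin:
            exports[dst] = passengers * prevalence
    return exports

print(expected_exported_cases("wuhan", 0.001))
```

Ranking destinations by this expected import rate is what lets such systems flag which cities to watch first — the speed advantage the paragraph above describes.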
Yet this increased ability to digitally map and analyze our environs carries risks. One concern is that geographic AI systems could lead to a new era of “digital redlining” — a reprise of the practice, which emerged in the U.S. in the 1930s, of government-backed mortgage providers denying loans to residents of minority neighborhoods regardless of their creditworthiness, on the justification that those loans were “high risk.” Digital redlining could lead businesses to eschew lower-income areas, for example by denying access to insurance coverage or imposing higher premiums. Even worse, geographic algorithms could make it easier for unscrupulous operators to identify areas and households with high rates of dependency on gambling or alcohol, for example, and target them with predatory loans.
Moreover, the predominant use of such systems in poorer areas could itself be deemed unfair or discriminatory, to the extent that they target particular areas or socio-economic groups. To take one example, more and more governments are using AI systems in their welfare and criminal justice systems. In the Netherlands, a court recently ordered the government to stop using an AI-based welfare surveillance system to screen applications for fraud on the grounds that it violated human rights and was being used predominantly in poorer immigrant neighborhoods.