Automating Care: about our new series on the rise of AI in caregiving


America is facing a care crisis, with too few care workers to fill the difficult, underpaid jobs that help the nation’s elderly and people with disabilities live with dignity.

Who – or what – will step into the breach?

Increasingly, the answer seems to be devices and automated systems powered by artificial intelligence (AI). In nursing facilities, sensors monitor the movements of patients and alert human staff if they fall or take a concerning number of bathroom breaks. In private homes, cameras watch elderly residents and ping their children if they wander somewhere unsafe. Artificial companions keep the lonely company. Cellphone apps track home healthcare workers’ physical location and count every minute they dedicate to their clients. Psychiatrists say their field is ripe for AI-based therapy.

The new Guardian US series Automating Care will scrutinize this monumental shift in the way society cares for those in need – and will consider the risks as well as the benefits.

Q&A: What is AI?

Artificial intelligence (AI) refers to computer systems that do things that normally require human intelligence. While the holy grail of AI is a computer system that is indistinguishable from a human mind, there are several forms of specialized, but limited, AI that are already a part of our everyday lives. AI may be used with cameras to identify someone based on their face, to power virtual companions, and to determine whether a patient is at a high risk for disease.

AI shouldn’t be confused with other kinds of algorithms. The simplest definition of an algorithm is that it’s a series of instructions needed to complete a task. For example, a thermostat in your home is equipped with sensors to detect temperature and instructions to turn on or off as needed. This is not the same as artificial intelligence.
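The thermostat described above can be sketched in a few lines of code. This is an illustrative sketch only, not drawn from the article: the function name `thermostat_step` and the half-degree margin are assumptions chosen to show that such an "algorithm" is just a fixed set of rules, with no learning involved.

```python
# A minimal sketch of the thermostat logic described above:
# a fixed series of instructions, not artificial intelligence.

def thermostat_step(current_temp, target_temp, heater_on, margin=0.5):
    """Decide whether the heater should be on, using simple fixed rules."""
    if current_temp < target_temp - margin:
        return True   # too cold: turn (or keep) the heater on
    if current_temp > target_temp + margin:
        return False  # too warm: turn it off
    return heater_on  # within the margin: leave the heater as it is

print(thermostat_step(18.0, 21.0, heater_on=False))  # True: start heating
print(thermostat_step(22.0, 21.0, heater_on=True))   # False: stop heating
```

However new data arrives, the rules never change; an AI system, by contrast, would adjust its behavior as it encountered new data.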

The rollout of AI today has been made possible by decades of research on topics including computer vision, which enables computers to perceive and interpret the visual world; natural language processing, allowing them to interpret language; and machine learning, a way for computers to improve as they encounter new data.

AI allows us to automate tasks, gather insights from huge datasets, and complement human expertise. But a rich body of scholarship has also begun to document its pitfalls. For example, automated systems are often trained on huge troves of historical digital data. As many widely publicized cases show, these datasets often reflect past racial disparities, which AI systems learn from and replicate.

Moreover, some of these systems are difficult for outsiders to interpret due to an intentional lack of transparency or the use of genuinely complex methods.


Artificial intelligence (AI) refers to computer systems that do things that normally require human intelligence. While the holy grail of AI is a computer system that is indistinguishable from a human mind, there are several forms of specialized AI that have already been rolled out in the caring industry.

The companies and government agencies creating these systems, and some care providers, say that they can help keep patients safe, free human caregivers from rote tasks, allow seniors to continue living in their own homes for longer, and cut down on fraud, waste and abuse.

Critics raise red flags around bias, surveillance and the erosion of autonomy in digital care systems. Assumptions about how the elderly and disabled should behave can be invisibly baked into the code. Systems that prioritize safety from falls over freedom of movement implicitly marginalize the elderly’s desire for privacy and self-determination in favor of assuaging their adult children’s fears.

An electronic timesheet that only allows caregivers to clock in or out from inside a client’s house assumes the disabled are homebound, not living active and independent lives. Systems that track a worker’s every movement and minute betray a deep cultural anxiety about the value of care and those who perform it, especially the Black and immigrant women who make up the majority of the care workforce.

The AI industry as a whole is also reckoning with other areas of bias. AI systems are trained on huge troves of historical digital data, but as many widely publicized cases show, these datasets often reflect past racial disparities in how patients are treated, which AI systems learn from and replicate.

Researchers have found that those being monitored by AI systems can also experience them as intrusive, fear that the new tools will limit their independence, and may prefer human contact to a persistent digital gaze.

Automating Care is guest-edited by Virginia Eubanks, the political scientist and author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, and by Alexandra Mateescu, a researcher at the Labor Futures initiative at Data & Society and co-author of AI in Context: The Labor of Integrating New Technologies.

It is based on research commissioned by the Guardian from the Social Science Research Council’s Just Tech program.

