AI Personal Identification Can See Past Our Faces
AI personal identification schemes are increasingly capable of discerning who we are and how we are. Recent research shows how our children’s virtual reality (VR) gaming systems can identify users just by measuring how they move their heads, arms, and hands. In at least one notable case, the VR system supplier’s Privacy Policy grants a long list of rights to use and distribute those patterns. Ultra-precise AI observation can also help doctors make earlier diagnoses of serious conditions like Parkinson’s Disease. Managing the tradeoffs between the substantial benefits and the threats presented by AI monitors is urgent, and growing more so.
VR Movement Recognition
All VR systems perceive head orientation, most track head position, and many follow hand orientation and position. Some can even track feet, chest, elbows, and knees. The capability is inexpensive: a simple machine learning model trained on only 5 minutes of data can identify 1 user out of 500 with 95% accuracy. That recognition skill is valuable. In industrial settings, like using VR to design advanced prototypes for new cars, you want to ensure that only authorized staff can tour the design. Consider also the benefits for access control in the home. An AI attendant could prevent your six-year-old from playing a first-person-shooter VR game with gory and suggestive scenes.
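To make the idea concrete, here is a minimal sketch of motion-based identification. Everything in it is invented for illustration: real VR systems learn from dense streams of head and hand poses, not the toy per-session "motion signature" summaries simulated below.

```python
# Hypothetical sketch: identifying users from motion telemetry.
# Each user is given a characteristic motion signature (e.g. average
# head height, head-turn speed, hand-swing amplitude); sessions are
# noisy samples around that signature. All values are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, sessions_per_user, n_features = 20, 30, 6

# One stable signature per user, plus per-session variation.
signatures = rng.normal(size=(n_users, n_features))
X = np.repeat(signatures, sessions_per_user, axis=0) + \
    rng.normal(scale=0.3, size=(n_users * sessions_per_user, n_features))
y = np.repeat(np.arange(n_users), sessions_per_user)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"identification accuracy: {accuracy:.2f}")
```

The unsettling point the sketch makes is how little machinery is needed: an off-the-shelf classifier and a handful of motion summaries per session are enough to tell twenty simulated users apart.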
Video Cameras Can Recognize Your Body, Too
Video cameras, too, can observe body shape and motion. A camera system won’t need to see our faces to know who we are. It might even recognize our maladies; that’s been done commercially with animals for several years. An Irish startup, Cainthus, uses AI video analysis to spot the early, barely perceptible onset of lameness in cattle. The software watches the animal’s gait and stance for tell-tale patterns and identifies the painful problem weeks before most veterinarians would. If you’ve ever seen a lame cow, you appreciate the acute suffering that the solution avoids. Software also takes video measurements of human bodies to assess medical and even mental states. That application dramatically raises the stakes for the opportunities and the risks.
Spotting Parkinson’s Sooner
Early Parkinson’s Disease diagnoses can greatly improve outcomes. But signs emerge gradually, sometimes with a barely noticeable tremor. There is no single test that proves you have Parkinson’s. Specialists weigh symptoms, movement, and body control in their diagnoses. Even for a trained professional, it is hard to tell the symptoms of Parkinson’s apart from those of other diseases with similar motor effects. Last month, researchers announced an AI observer of body movement and positioning that helps medical professionals make earlier, more accurate Parkinson’s diagnoses. A watchful AI can detect the slightest tremors and compare them with a vast, ever-growing store of examples. The AI observer supports the doctor, giving a valuable additional opinion. High-quality data, methodically and objectively analyzed, can provide life-changing results.
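A toy sketch shows how even faint tremors can stand out to software. It assumes (for illustration only) that a resting tremor appears as a 4–6 Hz oscillation in a recorded motion signal; the sampling rate, amplitudes, and detection threshold are all made up, and real clinical tools are far more sophisticated.

```python
# Hypothetical sketch: flagging a possible resting tremor as a dominant
# 4-6 Hz oscillation in a simulated wrist-position signal. The sampling
# rate, frequency band, and threshold are illustrative assumptions.
import numpy as np

fs = 100.0                       # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)     # 10 seconds of simulated motion data

# Slow voluntary movement, a faint 5 Hz tremor, and sensor noise.
rng = np.random.default_rng(0)
signal = (0.5 * np.sin(2 * np.pi * 0.3 * t)
          + 0.05 * np.sin(2 * np.pi * 5.0 * t)
          + rng.normal(scale=0.01, size=t.size))

# Compare energy in the tremor band against the rest of the spectrum.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 4.0) & (freqs <= 6.0)
baseline = (freqs > 1.0) & ~band
tremor_detected = spectrum[band].max() > 5 * np.median(spectrum[baseline])
print("possible tremor:", tremor_detected)
```

Note that the tremor here is a tenth the size of the voluntary movement and smaller than a human eye would reliably notice in the raw trace, yet it dominates its frequency band, which is exactly the kind of pattern a watchful AI can surface for a doctor to review.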
Bad Data, Weak Assumptions, Off-Target Results
Used with poor planning and weak evidence, body and movement analysis can reinforce stereotypes and offer false assurances. For example, widely deployed classroom monitoring systems in China alert parents that their children are inattentive or disruptive if they are not seated and facing their teacher. The parents may feel empowered to help their children, but there is scant evidence of learning benefits from this digital oversight. Amazon is installing driver-monitoring video AI in delivery vans in Denver, CO. If you yawn or glance at your phone while driving, your supervisor will be notified and will likely talk with you at the end of your shift. This week, all drivers must either sign a legal release of the captured data or quit their jobs. Amazon says that awareness improves safety. I want to be a safer driver, too. If the digital agent’s feedback were given only to me and styled as expert coaching, I would really appreciate it.
We’ve Already Been Here: Facial Recognition
Societies are already wrestling with when and how to use another pattern-matching AI: facial recognition. It has productive uses. The FBI leveraged facial recognition algorithms to successfully identify members of the mob that stormed the U.S. Capitol in January. However, thanks to poorly selected (biased) training data, the technology is notoriously less accurate for non-white faces. A 2019 U.S. Government study found that Asian and Black people were up to 100 times more likely than white men to be misidentified by face-watching AIs. AI identification of faces, bodies, and movement is like Alfred Nobel’s dynamite. It gets the job done like nothing else: use with extreme caution.
Who Should Know You? When?
In a few years your home’s virtual assistant may say, “Good morning. You’re slightly favoring your right knee; better take a couple of aspirin. Should I make an appointment with your orthopedist?”
After an evening out your car may be more than a bystander: “Sorry, you’re drunk. I’m not letting you drive me.” If it’s self-driving it may add, “I’ll drive myself tonight. Why don’t you take a nap? I’ll pull over and roll down the window if you look like you’re going to throw up.” Who should know that you have a bad knee, or that you tried to drive drunk last night: Your doctor? Apple? Your spouse? Your employer? The government? Every culture and every person has to think about this most personal and increasingly relevant question. AI’s ability to recognize who we are and how we are is racing forward with immense potential to alleviate suffering and to invade our privacy. It is imperative that we actively manage where and how to use it.

Title Images:
By Igel B TyMaHe – Own work. This file was derived from: Paz e bem na lente.svg, CC0, https://commons.wikimedia.org/w/index.php?curid=49571409
Photo by National Cancer Institute on Unsplash
Photo by Barbara Zandoval on Unsplash