In a recent article, Doctor Leah Kaminsky discusses a new era in healthcare, one in which new technology offers hope.
How Can AI Help?
AI will be able to predict changes to a patient’s health long before a doctor could, perhaps even years in advance. Various studies have shown that AI, through continuous monitoring of a person’s pulse, can anticipate whether that person is at risk of a heart attack. Another method of gauging the risk of cardiovascular disease is tracking patterns of blood vessels on the retina at the back of the eye.
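The studies the article cites do not publish their models, but the basic idea of flagging risk from continuously monitored pulse data can be caricatured as an anomaly check over a heart-rate time series. This is a toy illustration only; the window size and threshold below are arbitrary assumptions, not any study’s actual method:

```python
# Toy illustration: flag a sustained, unusual rise in resting heart rate
# from a stream of daily readings. Real clinical models are far more
# sophisticated; the window and threshold here are arbitrary assumptions.

from statistics import mean, stdev

def flag_elevated_pulse(readings, window=7, z_threshold=2.0):
    """Return indices of days whose reading sits more than
    z_threshold standard deviations above the preceding window."""
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (readings[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

daily_resting_hr = [62, 61, 63, 62, 60, 61, 62, 63, 61, 62, 74, 75]
print(flag_elevated_pulse(daily_resting_hr))  # → [10, 11]
```

Here the jump from the low sixties to the mid-seventies on the last two days stands out against the preceding week’s baseline, so those days are flagged.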
Future Exploration Network founder Ross Dawson joins numerous healthcare professionals and corporations in working to expand preventative care and to catch health problems early, before they develop.
Dawson explains that “Shifting societal attitudes, with increased expectations to live full and healthy lives, are driving these changes. This decade, the explosion of new technology and algorithms has given rise to deep learning in artificial intelligence, becoming vastly more effective at pattern recognition than humans.”
This technology can detect rare genetic disorders simply by analyzing facial features. It can sort through reams of data quickly, identifying patterns that would otherwise be nearly impossible to detect. This form of AI, called deep learning, promises to revolutionize medicine.
A Machine That Monitors Patients Through Walls
Doctor Kaminsky’s article introduces Dina Katabi, who designed a device that sends out low-power wireless signals (electromagnetic waves) that reverberate off a patient’s body. These signals can monitor a variety of patient characteristics (such as sleep and posture) for signs of disease. Every movement a patient makes changes the surrounding electromagnetic field, and Katabi’s device senses and tracks these minute movements.
The device is capable of monitoring:
- Sleep patterns
- Emotional state
Katabi explains: “We don’t see them, but they can complement our current knowledge in almost magical ways. Our new device is able to traverse walls and extract vital information which can augment our limited ability to perceive change. More and more elderly people are living alone, burdened by chronic disease, which leads to enormous safety concerns.”
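Katabi’s actual signal processing is not described in the article, but the underlying idea can be sketched: a body moving with breathing modulates the reflected radio signal’s amplitude periodically, so the dominant low frequency in that amplitude estimates the breathing rate. Everything below (the naive Fourier scan, the 0.1–0.5 Hz band, the simulated signal) is an invented illustration, not Katabi’s algorithm:

```python
# Illustrative sketch: estimate breaths per minute from reflected-signal
# amplitude samples by scanning the plausible breathing band (0.1-0.5 Hz)
# with a naive discrete Fourier transform. Not the device's real method.

import math

def breathing_rate_bpm(samples, sample_rate_hz):
    """Return the strongest frequency in 0.1-0.5 Hz, in breaths/minute."""
    n = len(samples)
    mu = sum(samples) / n
    centered = [s - mu for s in samples]
    best_freq, best_power = 0.0, 0.0
    k = 1
    while k * sample_rate_hz / n <= 0.5:
        freq = k * sample_rate_hz / n
        if freq >= 0.1:
            re = sum(c * math.cos(2 * math.pi * k * i / n)
                     for i, c in enumerate(centered))
            im = sum(c * math.sin(2 * math.pi * k * i / n)
                     for i, c in enumerate(centered))
            power = re * re + im * im
            if power > best_power:
                best_freq, best_power = freq, power
        k += 1
    return best_freq * 60

# Simulated amplitude: 0.25 Hz breathing (15 breaths/min), 4 samples/sec
rate = 4
sig = [1.0 + 0.1 * math.sin(2 * math.pi * 0.25 * t / rate)
       for t in range(240)]
print(round(breathing_rate_bpm(sig, rate)))  # → 15
```

The point of the sketch is that no camera is needed: a periodic disturbance in the electromagnetic field alone carries the vital sign.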
Face2Gene Uses AI to Identify Rare Genetic Disorders
The Face2Gene app was developed by FDNA, a Boston startup. The app uses “deep phenotyping” (observing characteristics) of a patient’s face to identify possible genetic diseases.
Its calculations were developed by processing approximately 17,000 photos of patients, each diagnosed with one of 216 different genetic disorders. Certain disorders cause patients to present with distinctive facial characteristics that AI can recognize but that may not be apparent to a doctor.
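FDNA has not published its model, but the core idea of “deep phenotyping” — matching a patient’s facial measurements against learned syndrome profiles — can be caricatured with a nearest-profile lookup. The syndrome names below are real disorders, but every number and feature is fabricated for illustration; the real system is a deep neural network, not a distance check:

```python
# Invented illustration of phenotype matching: each syndrome is summarized
# by an average vector of facial measurements (e.g. eye spacing, philtrum
# length), and a new patient is matched to the nearest profile.
# Every number here is fabricated.

import math

syndrome_profiles = {
    "Noonan syndrome": [0.42, 0.31, 0.55],
    "Williams syndrome": [0.35, 0.48, 0.40],
    "Cornelia de Lange syndrome": [0.50, 0.22, 0.61],
}

def closest_syndrome(measurements):
    """Return the profile with the smallest Euclidean distance."""
    return min(
        syndrome_profiles,
        key=lambda name: math.dist(measurements, syndrome_profiles[name]),
    )

print(closest_syndrome([0.41, 0.30, 0.57]))  # → Noonan syndrome
```

The 17,000 training photos play the role of the profile table here: the more diagnosed examples the system has seen, the more reliable the match.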
A 91% Success Rate
Face2Gene’s system outperformed doctors in identifying certain syndromes in 91 percent of instances. These results are encouraging, as they translate into shorter time to diagnosis and faster medical intervention.
AI will have an impact on the medical profession because rare diseases affect roughly 10% of the population.
AI and Alzheimer’s
Ben Franc, a professor at Stanford University, and his team are working on a pilot project to investigate whether changes to brain metabolism that are evident in whole-body PET scans can predict the onset of Alzheimer’s.
In one study using AI, researchers developed algorithms that identify glucose uptake in the particular brain areas affected earliest by the disease.
In another study involving images from forty patients, AI identified Alzheimer’s an average of six years before doctors’ diagnosis.
Franc and his team also believe that combining data from MRI and PET scans can predict a patient’s subtype of breast cancer and their chances of recurrence-free survival. This new field, called radiomics, involves over 5,000 imaging features that AI is capable of analyzing.
Franc states that “Computers can find associations that would take humans a lifetime. AI gives us the opportunity to harness the expertise of exposure to millions of cases. This can lead to early diagnosis and hopefully more timely and effective treatment for patients.”
Mental Health and Virtual Therapists
Mental health conditions are reaching epidemic proportions in some countries and currently affect approximately 25% of the world’s population.
In a recent article published in the Journal of Ethics, Nicole Martinez-Martin of the Stanford Center for Biomedical Ethics and her colleagues wrote that these new techniques are expected to account for major advances to psychiatric outcomes by “improving predictions, diagnosis, and treatment of mental illness.”
AI has made treatment available to people who previously had limited access to a therapist. For example, the Wysa bot, designed by therapists and AI researchers, is programmed to ask probing questions that help build mental resilience skills. It does this through evidence-based talking techniques such as cognitive behavioral therapy (CBT). By tuning into cues in a person’s tone and choice of vocabulary, it offers a new method of detecting mental health issues earlier.
Ellie, a virtual therapist developed by the University of Southern California’s Institute for Creative Technologies, can analyze over sixty reference points on a human face. This enables the bot to determine whether a person may be anxious, depressed, or suffering from other issues. Other tell-tale signs of a patient’s mental state include the duration of pauses before responding to a question, body posture, and even the number of times the subject nods.
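USC has not published Ellie’s scoring logic, but the general pattern — combining several behavioral cues into a single rough indicator — can be sketched as a weighted sum. The features, weights, and scale below are all invented for illustration; a real system would learn them from data rather than hard-code them:

```python
# Toy illustration (not USC's actual model): combine behavioral cues such
# as average pause length before answering and nod frequency into a rough
# distress score. Features, weights, and caps are invented assumptions.

def distress_score(avg_pause_s, nods_per_min, gaze_down_ratio):
    """Weighted sum of normalized cues; higher means more concern."""
    score = 0.0
    score += min(avg_pause_s / 5.0, 1.0) * 0.4    # long pauses before answers
    score += min(nods_per_min / 10.0, 1.0) * 0.2  # frequent nodding
    score += min(gaze_down_ratio, 1.0) * 0.4      # time spent looking down
    return round(score, 2)

print(distress_score(avg_pause_s=4.0, nods_per_min=2.0, gaze_down_ratio=0.6))
```

No single cue is decisive on its own; the value of a system like Ellie is that it tracks many weak signals at once, consistently, across an entire session.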
But Do We Want a Machine That Does Not Worry About the Decisions It Makes to Make Those Decisions?
In a recent article published in the AMA Journal of Ethics, the authors say that “An algorithm will not lose sleep if it predicts with a high degree of confidence that a person would wish for a life-support machine to be turned off.”
On the one hand, an AI doctor lacks the human touch of a typical caregiver; on the other, it may be able to detect issues far earlier, giving us lifesaving early warnings.
Doctor Kaminsky concludes: “So, while we might not expect a computer to feel, we may want it to understand what and how we are feeling.”