Imagine calling a suicide prevention hotline in a crisis. Do you ask about their data collection policy? Do you assume your data are protected and kept secure? Recent events may make you consider your answers more carefully.
Mental health technologies such as bots and chat lines serve people who are experiencing a crisis. They are among the most vulnerable users of any technology, and they should expect their data to be kept safe and confidential. Unfortunately, recent dramatic examples show highly sensitive data being misused. Our own research has found that, when collecting data, developers of mental health-based AI algorithms simply test whether the algorithms work. They generally do not address the ethical, privacy and political concerns about how the data might be used. The same standards of health care ethics should apply.
Recently, Politico reported that Crisis Text Line, a nonprofit organization claiming to be a secure and confidential resource for people in crisis, had been sharing the data it collects from users with Loris AI, a for-profit spin-off company that develops customer service software. Crisis Text Line officials initially defended the data exchange as ethical and “fully compliant with the law.” Within days, however, the organization announced it had ended its data-sharing relationship with Loris AI, even while maintaining that the data had been “handled securely, anonymized and scrubbed of personally identifiable information.”
Loris AI, a company that uses artificial intelligence to develop chatbot-based customer service products, had used data generated by more than 100 million Crisis Text Line exchanges to, for example, help service agents understand customer sentiment. Loris AI has reportedly deleted all the data it received from Crisis Text Line, though it is unclear whether that extends to the algorithms trained on those data.
This and similar incidents highlight the growing value placed on mental health data as part of machine learning, and they illustrate the regulatory gray zones through which these data flow. The health and privacy of vulnerable and at-risk people hang in the balance, and they are the ones who bear the consequences of poorly designed digital technologies. In 2018, U.S. border officials denied entry to several Canadians who had survived suicide attempts, based on information in a police database. Consider that: noncriminal mental health information was shared through a law enforcement database and used to flag people attempting to cross a border.
Policy makers and regulators need evidence to properly govern artificial intelligence, let alone its use in mental health products.
We examined 132 studies that tested automation technologies, such as chatbots, in online mental health initiatives. In 85 percent of the studies, the researchers did not address, either in the study design or in reporting the results, how the technologies could be used in negative ways. This was despite some of the technologies carrying serious risks of harm. For example, 53 of the studies used public social media data for predictive purposes, such as determining a person’s mental health diagnosis, often without consent. None of the studies we examined grappled with the discrimination people might experience if these data were made public.
Few studies included the views of people who have used mental health services. Researchers in only 3 percent of the studies appeared to substantively involve input from people who have used mental health services in the design, evaluation or implementation. In other words, the research driving this field sorely lacks the participation of those who will bear the consequences of these technologies.
Mental health AI developers must explore the long-term and potential negative effects of using these technologies, including how the data are used and what happens when the technology fails users. Editors of scholarly journals should require this information for publication, as should institutional review board members, funders and others. These requirements should accompany the urgent adoption of standards that promote lived experience in mental health research.
On the policy front, most U.S. states offer specific protections for typical mental health information, but emerging forms of data concerning mental health appear to be only partially covered. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) do not apply to direct-to-consumer health care products, including the technology embedded in AI-based mental health products. The Food and Drug Administration (FDA) and the Federal Trade Commission (FTC) may have a role to play in evaluating these direct-to-consumer technologies and their claims. However, the FDA’s scope does not appear to extend to health data collectors such as wellness apps, websites and social networks, which excludes most “indirect” health data. Nor does the FTC cover data collected by nonprofit organizations, which was a major concern raised in the Crisis Text Line case.
The generation of data about human suffering concerns far more than potential privacy violations; it also poses risks to an open and free society. The possibility that people will police their own behavior for fear of the unpredictable datafication of their inner world would have profound social consequences. Imagine a world in which people need to seek out expert “social media analysts” to help them craft content that looks “mentally healthy,” or in which employers screen prospective employees’ social media habits for “mental health risks.”
Everyone’s data, whether or not they have ever engaged with mental health services, could soon be used to predict future distress or impairment. Experiments with AI and big data are elaborating our everyday activities into new forms of “mental health-related data” that may escape current regulation. Apple is currently working with the multinational biotechnology company Biogen and the University of California, Los Angeles, to explore the use of phone sensor data, such as movement and sleep patterns, to infer mental health and cognitive decline.
The theory is that if enough data points about a person’s behavior are processed, signs of ill health or disability will emerge. Such sensitive data create new opportunities for discriminatory, biased and invasive decision-making about individuals and groups. How will data labeled as “depression” or “cognitive impairment,” or as likely to become those things, affect a person’s insurance rates? Will individuals be able to contest such designations before the data are transferred to other entities?
Things are moving fast in the digital mental health sector, and more and more companies see the value in using people’s data for mental health purposes. A World Economic Forum report values the global digital health market at $118 billion worldwide and cites mental health as one of the fastest-growing sectors. A dizzying array of start-ups are vying to be the next big thing in mental health, with “digital behavioral health” companies reportedly attracting $1.8 billion in venture capital in 2020 alone.
This flow of private capital stands in stark contrast to underfunded health care systems in which people struggle to access appropriate services. For many, cheaper online alternatives to in-person support may seem like the only option, but that option creates new vulnerabilities that we are only beginning to understand.
IF YOU NEED HELP
If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988, or use the online Lifeline Chat.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.