The Spooky, Loosely Regulated World of Online Therapy

Starting treatment with Better Help, one of the most prominent “therapy-on-demand” apps to launch over the last few years, is easy, which is more or less the point. Like many of the businesses offering therapy online, the service promotes itself as a seamless way to access mental health services: “Message your therapist anytime, from anywhere.” For under $40 a week, a subscriber can text, call, or video chat a licensed counselor. As the company’s founder has said, one of Better Help’s central missions is to “destigmatize mental health.” To this end, it has partnered with the NBA player Kevin Love and—somewhat controversially—enlisted YouTube personalities to create sponsored content about their own depression. Late last year, noted dubstep celebrity Bassnectar donated a thousand free Better Help subscriptions to his fans.

Better Help’s users skew young and female, and it’s been downloaded nearly half a million times in the last year, according to a mobile analytics and intelligence firm called Appfigures. Essentially, the company operates as a conduit between people looking for therapy and counselors working on a contract basis. It also operates a somewhat baffling array of websites, sorted by demographic interest, which funnel back into its telemedicine service: Pride Counseling for LGBTQ users, Faithful Counseling for those seeking therapy from a “Biblical perspective,” and Teen Counseling for, well, the teens. All of these divisions are advertised as “100% private,” operating in accordance with HIPAA, the suite of regulations guarding medical data. But as with many of the endlessly iterating companies that generate the vast ecosystem of health technology, how “privacy” applies when it comes to making consumers out of patients is a more fluid issue.

In order to understand how Better Help handles its users’ data, we signed up for the service and monitored what kinds of information it was collecting and sending elsewhere. According to the company, the platform encrypts information shared with therapists, and the licensed counselors who contract with the service are prevented by the regulations of their profession from sharing information about patients, unless there is a risk of physical harm. But as we found when we monitored the app, the realities of advertising on the internet, and the web of third-party services that apps like Better Help tend to use, mean that some sensitive information does end up being shared—all with the ostensible goal of better tracking user behavior, and perhaps giving social media companies an easy way to see who’s feeling depressed.

Of all the information the average internet user shares with the technology companies that dominate their lives, health data—and especially mental health data—is some of the most valuable, and controversial: Though social media conditions a person to share every aspect of their being, at every moment, a company automatically telling Snapchat and Pinterest you’re signing up for therapy still feels pretty spooky, even if it’s covered in the fine print. It also brings up questions about how a person’s intimate, supposedly private sessions might be exploited by the advertising industry, which isn’t exactly known for operating in good faith. And while there’s no reason to believe the information Better Help is collecting will be weaponized, there is still some stigma in struggling with mental health. Insurance companies and employers are barred by law from discriminating against people based on their mental health histories; that doesn’t mean they always follow the rules. It all depends on how much you trust the company with the information you’re feeding it: In this case, an app developed by an Israeli serial entrepreneur who is quick to note he is definitely not a medical professional, nor is he pretending to be.

On one hand, this is how the internet works now. When we brought our concerns to Better Help, the company essentially brushed them off, telling us their methods were standard and that they “typically far exceed all applicable regulatory, ethical and legal requirements.” And it’s true: There are no laws against a therapy app telling Facebook every time a person talks to their therapist, or sharing patients’ pseudonymized feelings about suicide with an analytics company that helps clients measure how “addicted” users are to an app. But it is a particularly stark illustration of how limited medical privacy regulations are in the expanding world of online health. Unless the people who trust Better Help deftly analyze the fine print, they might not have much of an idea of how far their intimate information is traveling, in a way that’s designed to make companies bigger and richer while patients become more easily gamed.


When we first downloaded the app, Better Help’s “intake” process, which helps the company match patients to providers, guided us through a brief survey: it catalogued gender, age, and sexual orientation, along with more specific areas of concern, like the last time a person had suicidal thoughts or whether they’d ever been to therapy before. From that moment, Better Help began to silently slip data to dozens of third parties, monitoring our behavior online and signaling to companies like Facebook and Google and Snapchat and Pinterest that we were considering Better Help treatment. (To see this data, Jezebel used software to intercept and analyze the traffic sent from the application to its various servers.)
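For readers curious about the mechanics: interception like this can be reproduced with off-the-shelf tooling. Below is a minimal sketch of the approach, assuming a proxy such as mitmproxy sits between the phone and the internet; the host list and output format are illustrative guesses, not a description of Jezebel’s exact setup or Better Help’s actual traffic.

    # Minimal mitmproxy addon: flag any request an app makes to a known
    # third-party analytics or advertising host. The host list below is an
    # assumption for illustration, not a record of Better Help's endpoints.
    from mitmproxy import http

    THIRD_PARTY_HOSTS = (
        "graph.facebook.com",
        "api.mixpanel.com",
        "tr.snapchat.com",
        "ct.pinterest.com",
    )

    class ThirdPartyLogger:
        def request(self, flow: http.HTTPFlow) -> None:
            host = flow.request.pretty_host
            if any(host.endswith(h) for h in THIRD_PARTY_HOSTS):
                # Log where the data is going and how much of it there is,
                # without storing the request contents themselves.
                size = len(flow.request.content or b"")
                print(f"{host}{flow.request.path} ({size} bytes)")

    addons = [ThirdPartyLogger()]

Run through mitmdump with the phone’s traffic routed through the proxy, a script like this surfaces each third-party call the moment it fires.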

Facebook, for instance, is alerted every time a person opens the app, essentially signaling to the social media company how often we were going to a “session” and when we booked our appointments. (To confirm Facebook’s retention of this information, we downloaded personal data from Facebook and identified the associated records from Better Help.) During a session with a therapist, we found that metadata from every message, though not its contents, was also sent to the social media company, meaning that Facebook knew what time of day we were going to therapy, our approximate location, and how long we were chatting on the app. When we asked about this directly, Better Help declined to elaborate on why a social media company needed to know quite so much about when and how we were asking for help—details that were connected to our Facebook profiles, and thus our identities, through a process the social media company uses to sell targeted ads, one that feels somewhat less nefarious when it’s used to approximate a shoe size than to figure out how much distress a person is in.
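To make the distinction between metadata and message contents concrete, here is a rough, hypothetical reconstruction of the kind of record an analytics SDK forwards when an app opens. Every field name and value below is invented for illustration; none of it is copied from Better Help’s payloads.

    # Hypothetical "app open" event, sketched in Python. This is not a dump of
    # Better Help's real traffic; it illustrates the kind of metadata involved.
    session_event = {
        "event": "app_open",                       # fired each time the app launches
        "advertiser_id": "38400000-8cf0-11bd-b23e-10b96e40000d",  # device ad ID
        "timestamp": "2020-02-18T21:04:11Z",       # when the "session" happened
        "approximate_location": "Brooklyn, NY",
        "session_length_seconds": 1740,            # how long the chat lasted
        # What is absent matters as much as what is present: the messages
        # themselves are not included, only the circumstances around them.
    }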

A research and analytics firm, MixPanel, got much more information about us, anonymized in the way medical data has been required to be since Congress passed HIPAA in the ’90s. (Research suggests that even when information like this is collected by hospitals or insurance providers, it’s often quite easy to match it back to an individual patient.) HIPAA was developed for anonymizing paperwork transferred between doctors; it’s been less useful in determining how medical privacy works in a paperless world. In Better Help’s case, we were assigned a random number to identify us, and our answers to the therapy intake questions were forwarded on to MixPanel. Based on what we saw, MixPanel knew where we were and what device we were using, approximately how old we were, whether we considered ourselves spiritual or religious, our financial status, and our sexual orientation. It also got information about our broader mental health histories: how long it had been since we’d been in therapy, and when we’d last had a plan to kill ourselves.
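The hand-off itself is mundane. Here is a sketch of how intake answers can reach an analytics service, written against Mixpanel’s standard Python client; the pseudonymous ID, event name, and property keys are stand-ins rather than the exact fields we observed.

    # Illustrative only: how survey answers get attached to a random identifier
    # and shipped to an analytics service using the Mixpanel Python library.
    from mixpanel import Mixpanel

    mp = Mixpanel("PROJECT_TOKEN")  # the app's analytics project token

    mp.track(
        distinct_id="a41f9c2e",          # random ID assigned at signup, not a name
        event_name="completed_intake",
        properties={
            "age_range": "25-34",
            "religious_or_spiritual": False,
            "financial_status": "fair",
            "sexual_orientation": "bisexual",
            "previously_in_therapy": True,
            "last_suicidal_plan": "over a year ago",
        },
    )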

MixPanel is the kind of startup that’s omnipresent yet mostly invisible to people who don’t work in tech; it’s used by everyone from Uber and Airbnb to BMW. Its basic conceit is producing monetizable data out of literally any human behavior: By tracking and cataloguing people’s habits and desires, the theory goes, companies can figure out how to best encourage their users to open an app again and again. There’s pretty good cause to believe that Anna Wiener’s recently released Uncanny Valley, a book about working in the tech industry that is careful to avoid naming any company directly, is largely about her experience working for MixPanel in its early days. Wiener spends a good chunk of the book wrestling with the ethical implications of data-mining, and eventually quits. “Depending on the metadata, users’ actions could be scrutinized down to the bone, at the most granular level imaginable,” she writes:

Data could be segmented by anything an app collected—age, gender, political affiliation, hair color, dietary restrictions, body weight, income bracket, favorite movies, education, kinks, proclivities—plus some IP-based defaults, like country, city, cell phone carrier, device type, and a unique device identification code. If women in Boise were using an exercise app primarily between the hours of nine and eleven in the morning—only once a month, mostly on Sunday, and for an average of twenty-nine minutes—the software could know. If people on a dating website were messaging everyone within walking distance who practiced yoga, trimmed their pubic hair, and were usually monogamous but looking for a threesome during a stint in New Orleans, the software could know that, too. All customers had to do was run a report; all they had to do was ask.

Later in the book, Wiener meets up with a former coworker for drinks, an engineer who’d been laid off. He’d been doing some thinking, he said, about the implications of the job: “Come on,” he told her, “we worked at a surveillance company.” Eventually, she comes to agree.


Better Help isn’t alone in sending data to social media or analytics companies—its competitor Talkspace, as we found, shares metadata about patient “visits” with MixPanel, including the character length of a message, and tells Facebook every time a person opens the app. Better Help’s founder, in an email, assured us “there is nothing we take more seriously than the security, privacy, and confidentiality of our members,” and explained that “in order to measure and monitor the overall use and engagement of the platform, we also use standard methods and tools which do not include any personally identifiable information.” When we followed up asking for clarity on how Better Help and third-party companies use this information, and why so much detail was shared with Facebook, we didn’t hear back. Facebook didn’t respond to our questions either, but the company’s stated intention for tracking user behaviors like these is to “personalize your experience” and show more relevant ads.

Nicole Martinez-Martin, an assistant professor with the Stanford Center for Biomedical Ethics, has been deeply involved in research around healthcare technologies and how personal data is collected. Even if data is de-identified, she says, it’s possible to make inferences about individuals or groups of people using the information collected by therapy apps. The issue lies squarely at an unregulated—or at least, underdeveloped—intersection of healthcare and consumer practice. “The way I’ve seen it phrased is that you can be de-identified, but you can still be reachable,” she says. Companies like Better Help aren’t releasing sensitive data that includes patients’ full names or email addresses, but through their partnerships they are facilitating practices that make it very easy to find and communicate with people who are seeking therapy. The information they’re collecting would also make it easy to identify patients as a class—as a person who is in therapy every day; as a person with recent suicidal ideation; as a person without much money who is deeply depressed—and treat them a certain way.
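As a toy illustration of what “de-identified but reachable” means in practice, consider the invented event log below: no one in it has a name, yet a few lines of code are enough to pull out a class of vulnerable users whose device IDs could then be handed to an ad platform as a custom audience.

    # Invented data, for illustration: de-identified records can still define a
    # reachable group. Each device ID here could be matched to an ad profile.
    events = [
        {"device_id": "a41f9c2e", "sessions_per_week": 5, "last_suicidal_plan": "recent"},
        {"device_id": "7bd03f11", "sessions_per_week": 1, "last_suicidal_plan": "never"},
        {"device_id": "c9e8d402", "sessions_per_week": 6, "last_suicidal_plan": "recent"},
    ]

    # "People in therapy nearly every day who recently had a plan to kill themselves"
    audience = [
        e["device_id"]
        for e in events
        if e["sessions_per_week"] >= 5 and e["last_suicidal_plan"] == "recent"
    ]

    print(audience)  # ['a41f9c2e', 'c9e8d402']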

Martinez-Martin points to a recent study that found a vast majority of top-rated depression and smoking cessation apps transmit information about their users to Facebook and Google, most often without informing the people whose valuable data was being collected. There just isn’t a regulatory framework to deal with a person’s vast and increasingly medicalized online footprint yet. Better Help’s privacy policy says it “will never sell or rent any information you shared on the platform,” while also noting it might disclose the “minimum necessary information” to third-party partners for customer service or targeted content, a disclosure that technically covers the information it’s sharing but can’t quite capture the broader implications.

It’s not difficult to imagine what use a social media company might have for tracking who is struggling with their mental health, and how often they seek therapy: In 2017, a leaked Facebook sales pitch showed the company boasting of how its algorithm could identify and target teenagers who were feeling “insecure,” “worthless,” “overwhelmed,” or “anxious.” “Thinking about how some of that ad targeting works adds that extra layer of how troubling this can be,” says Martinez-Martin. “These are people reaching out for help, and may already be at a vulnerable point… Someone who is in a fairly deep depression could be targeted in certain ways by ads that may be pushing on those vulnerable points.” It’s an issue that’s been relevant across platforms: A few years ago, shady rehab ventures gamed Google’s advertising algorithm to target desperate people struggling with alcohol and drugs.

In a moment when healthcare information is one of the tech industry’s most valuable currencies, the Department of Health and Human Services is struggling to redefine medical privacy. “Those of us who believe in technology’s potential for good must lean into this conversation and embrace that it will be messy, incremental, and iterative,” the agency’s chief data officer said after a comprehensive report found HIPAA protections to be wildly insufficient given how data moves today. Recently, the department proposed rules that would facilitate data-sharing between software companies and hospitals, creating common standards so patients could opt to share their personal information with whatever apps they choose. Amazon and Apple have indirectly indicated their support, and the rules could break open the $3.5 trillion healthcare industry for technology companies to finally, truly disrupt. Patient advocates and many hospital systems are concerned.

Supporters of the new rules say that they would encourage informed consent, forcing patients to think more critically about with whom they’re sharing their medical data. And perhaps many patients would be fine sharing their intake forms with MixPanel, or their visit lengths with Facebook, or disclosing to Google that they asked for help—but they would have to be in a position to know that’s what they were signing up for when they logged into a therapy app, rather than wondering, as they often do, how exactly Instagram knows so much about how they feel.

(Updated 3/2/22 with new details)
