February 23, 2026

Industry Science Can Be Real Science. Here’s What It Requires.

Written by Nick Forand, PhD, ABPP, Head of Clinical Innovation and Research at Two Chairs

The science of implementing evidence-based practices in behavioral health is well established and respected. But when research comes from inside a healthcare company, it’s often viewed through a different lens. People want to trust outcomes data. They also know how easy it is to shape a story when your business benefits from the conclusion.

That concern isn’t limited to industry. Everyone is incentivized to publish positive results, whether to support their career or earn recognition from peers. Psychological science is littered with examples of fabricated data and retracted studies, so it’s unrealistic to think bias is unique to industry. Still, industry skepticism comes from a different place: when commercial interests sit next to scientific ones, people question rigor. There are high-profile examples of companies manipulating data or even ghost-writing studies to improve their reported outcomes. So the concern isn’t irrational.

At Two Chairs, we recently published a peer-reviewed paper in Frontiers in Health Services evaluating a large-scale implementation of measurement-based care (MBC) inside a technology-supported psychotherapy practice. The headline finding was a 24% relative improvement in symptom outcomes. In practical terms, that translates to roughly a five percentage point increase in the frequency of meaningful symptom improvement from pre- to post-implementation.
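As a back-of-the-envelope check on how a relative improvement maps to a percentage-point change, the two reported figures imply a pre-implementation improvement rate of roughly 5 / 0.24 ≈ 21%. That baseline rate is inferred for illustration, not a figure reported in this article:

```python
# Illustrative arithmetic only: the baseline rate below is an assumption
# inferred from the two reported figures, not study data.
baseline_rate = 0.21          # assumed pre-implementation rate of meaningful improvement
relative_improvement = 0.24   # reported 24% relative improvement

post_rate = baseline_rate * (1 + relative_improvement)
absolute_gain_pp = (post_rate - baseline_rate) * 100

# A 24% relative gain on a ~21% baseline works out to about 5 percentage points.
print(f"post rate: {post_rate:.3f}, absolute gain: {absolute_gain_pp:.1f} pp")
```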

If you’re a payer or a clinician, the right question isn’t whether that number sounds good. It’s whether the method stands up to scrutiny.

What We Studied — And Why Scale Matters

This was not a small pilot or a curated cohort designed to tell a clean story. The study included 755 clinicians and 18,721 patients. We analyzed outcomes using routine PHQ-9 and GAD-7 measures already embedded in care. Every eligible patient with a baseline score of 5 or greater was included. If someone started out essentially asymptomatic, there was no meaningful room for improvement, so those cases were excluded.

Behavioral health often leans on tightly controlled RCTs and assumes those findings will translate into routine practice. RCTs are powerful tools when feasible. If you can tightly control an intervention, you’re in a stronger position to draw causal conclusions.

But real-world practice has its own value. In a large behavioral health organization, you have diversity of clinicians, diversity of patients, and all the messy variables that come with everyday care. If you can demonstrate impact in that environment, the results carry real weight.

We didn’t run a randomized trial. We had a real implementation to execute, and time constraints did not permit rolling out the changes to only a portion of our clinical population. So when it came time to evaluate outcomes, we used the best methods available to us: we controlled for as many alternative explanations as we could.

Measurement-Based Care Is a Clinical Skill

Many organizations say they practice measurement-based care. Often what that means is they launched a measure. They may encourage patients to complete symptom scales and maybe show those scores to clinicians. But that is not the same as practicing MBC.

Measurement-based care is a clinical skill. It requires interpreting the data, discussing it with the client, adjusting treatment accordingly, and then reassessing to see whether those changes mattered.

Training a clinical skill is complex. You have to understand where clinicians are starting from and what gaps exist in knowledge and behavior. You need leadership support, dedicated training time, aligned supervision, and reinforcement mechanisms. You need to address natural resistance. If clinicians are asked to do something new without clear justification and organizational backing, it won’t stick.

And sustainment matters. How are new clinicians trained? How do you prevent drift over time? Implementation science treats these questions as central, not peripheral.

Resisting the Urge to Cherry-Pick

There are many ways to distort industry research. One of the most common is selecting the data that tells the most favorable story.

We chose not to do that.

Every eligible therapist was included. We did not filter to top performers. We did not engineer subgroups. Aside from the baseline severity threshold, we did no trimming of the dataset.

That choice forces you to confront real-world complexity. In a non-randomized design, you must ask: did outcomes improve because therapists changed? Because patients changed? Because something else shifted?

We examined therapist-level performance among therapists who contributed enough patient outcomes during both the pre- and post-implementation periods. Approximately 95% of those therapists showed improved patient outcomes after the implementation period. That makes it unlikely the effect was driven by replacing weaker clinicians with stronger ones.

We also statistically adjusted for changes in patient population, including symptom severity and other factors that are associated with treatment outcomes. After controlling for those variables, the improvement remained.
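One common way to adjust for a shifting patient mix is direct standardization: compute improvement rates within baseline-severity strata, then weight both periods by the same severity mix. This is a minimal sketch of that idea with made-up numbers and severity bands; the paper's actual statistical models are more involved:

```python
# Illustrative direct standardization. All counts are invented for the sketch;
# they are not study data. Each entry: severity band -> (n_patients, n_improved).
pre = {"mild": (400, 100), "moderate": (400, 80), "severe": (200, 30)}
post = {"mild": (300, 90), "moderate": (500, 120), "severe": (200, 36)}

def standardized_rate(groups, weights):
    """Average the per-stratum improvement rates using a fixed set of weights."""
    total = sum(weights.values())
    return sum(weights[band] / total * (improved / count)
               for band, (count, improved) in groups.items())

# Weight both periods by the pre-period severity mix, so a drift toward
# milder patients cannot masquerade as better outcomes.
weights = {band: n for band, (n, _) in pre.items()}
pre_rate = standardized_rate(pre, weights)
post_rate = standardized_rate(post, weights)
print(f"pre: {pre_rate:.3f}  post (severity-adjusted): {post_rate:.3f}")
```

With these invented counts, the adjusted comparison still shows a gain, which is the pattern the adjustment is designed to confirm or rule out.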

That doesn’t make this an RCT. But it does add credibility. 

What Actually Changed in Care

Outcomes don’t improve because you launch a training. They improve because clinician behavior changes.

We examined mechanisms alongside outcomes. In any intervention study, you need evidence that the hypothesized driver of change actually occurred. In a medication trial, you confirm that participants took the medication, or examine whether the dose is related to the magnitude of change.

In our case, we analyzed therapist-documented discussions of measures in session. During the implementation period, those discussions increased. We also saw meaningful shifts in how alliance data were used. Once clinicians were trained on how to interpret and discuss alliance measures appropriately, engagement with that data increased incrementally over time.

That pattern supports what we would expect from a genuine implementation: behavior changes precede and accompany outcome changes.

The Responsibility of Publishing Industry Research

Publishing outcomes from within a commercial organization carries risk. You may not like what you find. You expose variability. You invite scrutiny.

There are baseline obligations: independent ethical review, protection of patient privacy, de-identification of data, and adherence to human subjects research standards. Those are non-negotiable.

But the deeper responsibility is transparency. If an organization claims evidence-based care yet does not measure outcomes at scale, the claim is incomplete. If it measures outcomes but only shares favorable subsets, credibility erodes.

Industry science can be real science. But it requires discipline: full-population reporting, explicit acknowledgment of limitations, statistical adjustment for bias, attention to mechanisms, and peer-reviewed publication.

If behavioral health wants payer trust, value-based reimbursement, and public confidence, we have to move beyond treating measurement as a branding exercise. Evidence-based care fails when implementation is weak. And weak implementation is common because it is easier to declare than to execute.

The real question isn’t whether commercial organizations are capable of rigorous research. It’s whether they are willing to hold themselves to the same standards they expect from academia.

Credibility is not a slogan. It is earned by putting the methods in plain view and allowing the field to evaluate them. That is what implementation science demands — and what patients deserve.

Nick Forand, PhD, is the Principal Clinical Research Scientist at Two Chairs and first author of "The impact of measurement based care at scale: examining the effects of implementation on patient outcomes and provider behaviors," published in Frontiers in Health Services.
