Experts at HMH Symposium Confront AI’s Ethical Crossroads in Healthcare   

November 25, 2025

HMH Symposium

Artificial intelligence (AI) promises to transform health care, but experts warn that its unchecked use threatens legal accountability, patient equity, and the humanistic values of medicine.

That push and pull in integrating AI into clinical practice set the tone for the 27th Hackensack Meridian Health Bioethics Symposium, "Ethics at the Interface: The Future of AI in Health Care." The discussion, moderated by Hannah I. Lipman, M.D., vice president of the Hackensack Meridian Health Bioethics Institute, featured presentations from I. Glenn Cohen, J.D., Charles E. Binkley, M.D., and Meg Young, Ph.D.

Who Is Responsible?
Cohen, a Harvard Law School professor and deputy dean, offered a sobering analysis of legal challenges. He outlined the stark liability landscape, noting that whether a clinician follows flawed AI advice or rejects sound AI advice, if a negative outcome follows, "You're liable."

Professor Cohen highlighted the technology's potential to exacerbate the industry's rampant inequality, pointing to "label bias" in AI models trained on skewed data solely to optimize cost. Biased data on the front end, he argued, can yield discriminatory outcomes on the back end.

While noting the potential for physician culpability in clinical AI matters, he argued that hospital systems carry most of the liability. Cohen likened AI adoption to "hiring rather than buying," a process that demands the same continuous auditing and improvement protocols expected in managing human employees.

People Over Model
Dr. Binkley, director of AI Ethics and Quality at Hackensack Meridian Health, presented a vision for AI rooted in preserving human connection.

Binkley traced his humanistic dedication to his calling to medicine during the AIDS epidemic of the 1980s and early 1990s, when he formed care teams to ensure that dying patients, predominantly men, would not be alone in their final hours.

This vantage point framed his conversation around the essential question: "What makes us human?"

After laying bare his concerns about a future of "health care only for the super-rich," the responsibility of health care companies for data stewardship, and the need to give patients agency over which tools are deployed in their care, Binkley described how leveraging AI against administrative drudgery could free clinicians to "spend more time focusing on being a counselor and advocate for our patients."

While optimistic about AI's potential, Binkley cautioned, "If we focus on the model and not the people, we are going to fail."

Answering Questions With Questions
Dr. Young, a senior researcher at the Data & Society Research Institute and expert in public sector AI adoption, addressed the critical need for governance and transparency.

She stated that AI vendors are often not transparent about their models and highlighted the work of the Government AI Coalition in creating a "safety in numbers" approach to vetting them. While it is unrealistic to rigorously assess every system, she noted, the evaluation of high-risk systems is improving through audits grounded in real-world settings.

The Q&A session that followed, led by Dr. Lipman, delved deeper into these practical dilemmas.

Young called for a "minimum standard of responsible AI vetting" across all industries and for better "notice and choice" when AI is deployed in patient care, similar to the European Union's General Data Protection Regulation (GDPR). Cohen backed baseline liability standards: "As long as you, the clinician, are in the loop, you are ultimately accountable and responsible for AI-aided decisions."

Dr. Binkley championed AI as a tool to enhance human connection, while Cohen offered a counterpoint, comparing interactions with some AI to a "toxic friendship." Young cautioned against AI chatbots programmed to feign empathy and breed dependency.

The environmental cost of AI was another significant point of contention. Cohen starkly stated, "Every time you ask ChatGPT a question, a tree dies," while Young provided data from her research, noting that AI's energy use is projected to triple by 2028.

What Makes Us Human
The panel's closing thoughts carried a unified message of cautious progress. Cohen urged constant back-end iteration once AI is implemented. Young urged the audience to "stay skeptical" and "be cautious about where and when AI deployment is needed." Binkley summed up both statements, advising, "We can't let the hype override our measurement of benefit."

Dr. Lipman highlighted the attendance of medical students during their precious free time outside of class. “It shows how seriously HMSOM takes the ethical aspects of technology and the importance of preparing future physicians for an AI-influenced practice,” she said.

Binkley closed by urging that, to coexist and develop with AI, we must repeatedly ask ourselves what makes us human, and he answered his own rhetorical question with a two-word, matter-of-fact spoiler: "It's love."