How Social Media Screening Prevents Healthcare Misconduct

Hospitals, clinics, and mental health facilities are meant to be sanctuaries of healing. But when misconduct enters the equation, the result isn’t just professional failure—it’s trauma, tragedy, and in some cases, death.
Start with the financial costs: healthcare fraud and abuse cost providers $60 billion a year. Sexual abuse by a single doctor cost one healthcare facility $215 million. And 30% of state medical boards have fielded complaints about online violations of patient confidentiality.
But what’s worse is the personal impact on patients and the reputational damage to medical facilities. Recently, a hospital had to issue a public apology after a nurse posted a picture of herself mocking newborn babies in the NICU (neonatal intensive care unit) on social media, setting off a firestorm for the hospital.
It wasn’t that long ago that a paramedic was fired after she posted photos of herself with unconscious, injured, and dying patients on her social media channels while on duty. One post, which members of the public shared with her employer, showed her flipping off the camera with an injured patient visible in the background.
Another disturbing example: a licensed mental health counselor encouraged one of his patients who was a survivor of abuse to return to sex work and then later responded to her online ad himself. The patient was so traumatized she immediately contacted the professional authorities. This shocking breach of ethics underscores how those tasked with supporting the most vulnerable can sometimes exploit that very trust.
There’s also the pediatrician at UF Health Jacksonville who was hired despite a long history of public misconduct allegations documented by former and current employers and patients. Complaints included racist remarks, misdiagnoses, and violations of patient privacy. Once on staff, colleagues raised serious concerns as the doctor continued to racially profile families, falsely reporting abuse in cases with no evidence and ignoring it when clear signs were present, based solely on race. Despite several reports from staff as well as social workers, university leadership failed to act. As a result, patients could not get the care they needed, children were wrongfully removed from their homes, families were torn apart, and lawsuits followed, including cases of wrongful arrest, all stemming from the doctor’s biased and deeply flawed decisions.
These aren’t isolated incidents. They reflect a deeper pattern of risk hiding in plain sight. Fortunately, social media background checks can give healthcare organizations the behavioral intelligence they need to prevent misconduct like this.
How Common Is Misconduct Among Healthcare Workers? [Statistics You Need to Know]
Misconduct in hospitals and healthcare settings is more common than you might think. According to Fama’s 2024 State of Misconduct at Work report, 1 in 15 candidates in the healthcare industry (6.5%) had warning signs of misconduct in their online presence. Healthcare candidates with these warning signs averaged 12 posts containing misconduct each.
Of the misconduct flags detected in healthcare:
- 5% involved sexually inappropriate content
- roughly 8% involved intolerance
- over 8% involved threats or violence
These aren’t warning signs you’d find on a resume, in an interview, or even in a traditional background check. But they are visible online, if you’re looking. This is why behavioral intelligence in healthcare is so important.
Healthcare Misconduct Case Studies from Social Media Screening
Fama’s AI screens over 10,000 online sources as well as Facebook, X (Twitter), TikTok, Instagram Reels, BlueSky, YouTube, and more to detect warning signs of misconduct in candidates. This behavioral intelligence is used by many healthcare systems to understand how a person may show up at work and to identify risks before they’re hired. And there’s a reason top healthcare providers choose Fama.
Case Study: Psychiatric Hospital Managing Director
In one case, we uncovered a managing director of a psychiatric hospital with a criminal history of conspiracy, fraud, and kickbacks. While some charges were dismissed, the candidate had previously pled guilty to misusing public money and was fined and placed on probation. None of this was easily visible without a deeper review of public records and online behavior.
Case Study: University Health Science Center President
In another case, Fama’s AI flagged a university health science center president who oversaw a body donation program that accepted over 2,000 unclaimed corpses without proper consent. More than 800 bodies, including those of military veterans, were dissected and leased out to third parties. The institution ultimately suspended the program, but only after generating $2.5 million in revenue. The candidate resigned in early 2025.
Case Study: Doctor Selling Body Parts on X
As if that weren’t horrific enough, another Fama screening uncovered a doctor selling the body parts of comatose patients on X (Twitter), using a public platform to traffic in the body parts of still-living patients.
These are people entrusted not only with care, but also with compliance, ethics, and oversight. And when breaches of those values go unchecked, patients, staff, and entire institutions pay the price.
How to Prevent Healthcare Misconduct Before Hiring Happens
In 2025, we’re not just living in the digital age. We’re living in the age of AI. Candidates are using AI to apply to jobs, write their resumes and cover letters, and even ace pre-hire assessments and interviews. At the same time, many misconduct behaviors are missed by traditional background checks, since pre-employment screening generally only catches misconduct recorded in the criminal justice system. This makes it extremely difficult for hiring teams to evaluate who candidates really are and how they will show up at work, leaving healthcare open to misconduct risks that traditional hiring methods don’t catch.
That’s where social media and online screening come in. Unlike traditional hiring processes and background checks, Fama provides behavioral intelligence, so employers are aware of the kind of misconduct warning signs that, if ignored, may become tomorrow’s scandal.
Misconduct doesn’t start at the hospital doors. It usually starts earlier: online, in public forums, in TikTok videos and Instagram Reels, shared without shame. By adding social media screening to the hiring process, healthcare organizations can close the gaps in traditional hiring solutions and evaluate candidates where they live their lives day to day: online.
Using AI and the public web, Fama’s platform detects early warning signs of misconduct, including threats, harassment, crime, and extremist behavior, before a candidate is hired. And it does so in a compliant, fair, and consistent way, helping healthcare organizations prevent harm before it occurs.
Behavioral intelligence is the missing layer of protection for an industry built on trust.
Healing Starts With Hiring Better
When misconduct goes unchecked in healthcare, the consequences aren’t just reputational—they’re life-altering. But by identifying these risks earlier through online screening, employers can uphold the integrity of their workforce, protect patient safety, and stop abuse before it starts.
Patients don’t get to choose who hires their caregivers. But HR and Talent Acquisition teams do. And as misconduct continues to rise in severity and visibility, it’s clear that the traditional hiring toolkit is no longer enough to prevent healthcare misconduct. Fortunately, HR and Talent Acquisition teams have new solutions available to close these gaps and help mitigate misconduct in healthcare before someone is hired as well as throughout the employee lifecycle.
Because trust in healthcare starts long before the first appointment—it starts with who gets hired.
Protect your patients, your reputation, and your organization. Click here to connect with one of our workplace misconduct experts today.