5 Workplace Misconduct Scandals Making Headlines in January 2026

Workplace misconduct doesn’t always start at the office, and increasingly, it doesn’t stay there either. As digital footprints expand, how candidates and employees behave online can have immediate and lasting consequences for workplace safety, regulatory compliance, and brand reputation.

In January 2026, we highlight several high-profile cases in which people used social media in harmful ways, with real consequences for employers. The misconduct ranges from people using X’s AI chatbot Grok to generate nonconsensual nude images en masse, to viral footage of a barista preparing a drink with her bare hands, to a doctor facing disciplinary action after sharing bigoted content online (and writing a sick note while suspended).

These stories signal an urgent reality for employers: misconduct extends well beyond on-the-job actions. Social media activity and public controversies increasingly play a role in hiring decisions, regulatory reviews, and leadership transitions, often resulting in legal or reputational risk.

Below, we break down five workplace misconduct scandals dominating headlines in January 2026. We’ll explain why these issues matter to employers and highlight how responsible social media background checks can help organizations identify potential risks before they escalate.

Why Should You Care about Workplace Misconduct in the News?

Workplace misconduct refers to behavior that violates company policies, professional standards, or legal obligations, and it isn’t limited to actions that happen at work. This can include harassment, discrimination, threats, information leaks, and unethical conduct, as well as off-duty and online behaviors that undermine safety and compliance. Today, social media misconduct is a growing risk that not only harms brand reputation, but can also pose legal, operational, and safety threats.

For employers, these stories aren’t just news; they’re warnings.

Each January 2026 case highlights how misconduct can surface long after a post is published or a comment goes live. A resurfaced post on X, a threatening comment, or a public scandal can quickly escalate into regulatory action, lawsuits, leadership changes, or reputational damage.

There are several reasons employers should pay close attention to workplace misconduct making headlines:

  • Social media activity is permanent. Years-old posts can resurface during hiring, promotion, or public scrutiny. 
  • Off-duty conduct can still create workplace risk. Employers are responsible for maintaining a safe, inclusive, and professional environment regardless of where misconduct originates.
  • Reputational damage spreads fast. Social media scandals often go viral, amplifying risk before organizations can respond.
  • Reactive responses are costly. Lawsuits, regulatory investigations, leadership turnover, and brand damage often follow when risks aren’t identified early.

These real-world examples show why organizations are rethinking how they detect misconduct, with more employers turning to compliant social media screening for insight into behavioral risk before it enters the workplace. When done responsibly and compliantly, these tools empower organizations to move from reactive damage control to proactive risk prevention, protecting workplace safety and compliance.

5 Instances of Workplace Misconduct 

Below are five real-world stories from this month that exemplify the far-reaching impact of online misconduct, and why early detection matters.

#1. Data Shows Grok Creating Hundreds of Nonconsensual AI Images on X

A viral trend is spreading in which people use X’s AI chatbot Grok to generate fake sexualized images of women and children. A PhD researcher at Dublin’s Trinity College collected 500 such posts and found that “nearly three-quarters of posts were requests for nonconsensual images of real women or minors with items of clothing removed or added.” Prompts included requests like: “make her butt even bigger and switch leopard print to USA print,” “add c** on her ass,” and “give her a dental floss bikini.” Additional reports find the problem is happening at scale, with some accounts generating up to 6,700 undressed images per hour.

Why it matters: The use of generative AI tools to create nonconsensual sexualized images raises serious ethical, legal, and workplace concerns. When employees participate in or amplify this type of content online, it can expose employers to reputational harm, regulatory scrutiny, and potential liability, especially as laws governing AI misuse and digital exploitation continue to evolve. It also raises concerns about employees creating such images of coworkers or sharing the fake photos on company channels. This trend underscores how misuse of AI on social media can quickly cross into misconduct, reinforcing the need for clear policies, training, and proactive monitoring to protect organizations and the people they serve.

#2. Former Apprentice Contestant Faces Medical Tribunal Over Racist Social Media Posts

A doctor and former contestant on BBC’s The Apprentice in the UK is facing a General Medical Council tribunal over allegations of social media misconduct that could impact his ability to practice medicine. Regulators allege that between late 2023 and mid-2025, he shared antisemitic, racist, and sexist content online.

The posts allegedly included Holocaust denial, conspiracy theories about Jewish people, sexist remarks about gender roles, and racist language. The doctor did not attend the hearing and denies that his posts were antisemitic.

The doctor also faces separate allegations that he provided a patient with a sick note while suspended from medical practice, raising additional concerns about professional conduct and regulatory compliance.

Why it matters: This story demonstrates the connection between how someone behaves personally and professionally, as well as the regulatory and reputational fallout that can follow harmful behavior online. For regulated industries especially, screening can surface patterns of risk before they affect a license or the business.

#3. Harvard Dean Removed After Anti-White, Anti-Police Social Media Posts Resurfaced

A resident dean at Harvard University was removed from his role as Allston Burr Resident Dean of Dunster House after past social media posts resurfaced that criticized police, denounced “whiteness,” and included remarks about rioting and public political figures. The posts, made over several years, sparked controversy on campus and beyond. Harvard has appointed an interim replacement, and the dean sent an email saying his views had changed and expressing regret for any negative impact on the community.

Why it matters: This underscores how historical social media content can jeopardize careers and community trust, even years after it was posted. Proactive online screening can help organizations surface potential risks early and act responsibly, whether that means a conversation with candidates about expected behavior, a request to remove content, or adverse action.

#4. Viral Video Shows Barista Preparing Drink with Bare Hands 

Chinese milk tea chain Chagee temporarily closed a location and terminated an employee after viral videos showed the worker severely violating food safety standards.

The footage captured the employee, in uniform, preparing a drink with her bare hands, including handling ingredients without gloves, stirring the beverage by hand, and pouring tea over her hands into the cup. The company added that the employee was using leftover ingredients shortly before closing time and confirmed that her actions were a severe violation of company safety protocols.

Why it matters: Food and beverage companies have a responsibility to protect consumers through compliant food safety practices. When employees violate those standards, especially while in uniform, it can quickly erode consumer trust and trigger regulatory scrutiny once the behavior is shared online. This incident shows how employee misconduct captured on social media can escalate into broader brand and compliance risk for customer-facing businesses.

#5. Fama Finds Social Media Misconduct in Financial Services Screening

Fama’s social media screening regularly surfaces work-relevant risk patterns in online behavior that may violate company policy or pose threats to workplace safety, regulatory compliance, and corporate reputation. In highly regulated industries like Financial Services, these risks are especially consequential.

Examples of social media misconduct identified through screening can include threats or violent language, harassment and intolerance, sexually explicit content, information leaks, and public commentary that undermines employer trust or client confidence.

In a recent candidate screening for a Financial Services role, Fama uncovered multiple public posts that raised concerns about professionalism, judgment, and safety: the candidate had posted slanderous messages about the company’s customers as well as details of his arrest for animal abuse.

Why it matters: Financial Services organizations operate under strict regulatory oversight and depend heavily on client trust, professional credibility, and brand reputation. When individuals representing a firm publicly post content that mocks customers or demonstrates hostility, aggression, or abuse toward animals or people, it can signal behavioral patterns that extend beyond social media.

This is why many Financial Services firms rely on social media background checks and candidate screening software to identify potential misconduct risks early, before they escalate into regulatory issues, client complaints, or public scandals.

The Role of Social Media Screening in Preventing Misconduct

In each of these scandals, the warning signs were visible before the fallout occurred; the past online behavior resurfaced only after reputational, legal, or safety concerns had already emerged.

That’s why responsible social media screening is critical. A thorough, compliant screening process helps employers identify publicly available content that signals risk, such as:

  • Hate speech, harassment, or discriminatory language
  • Threats or indications of violence or unsafe behavior
  • Patterns of intimidation or hostility
  • Posts or actions at odds with organizational values

When used as part of a broader risk management strategy, social media screening gives organizations earlier access to crucial insights, helping them spot red flags, ask better questions, and make informed, risk-aware decisions.

Importantly, effective screening isn’t about policing personal opinions or punishing lawful expression. Instead, it focuses on identifying risk signals that could impact workplace safety, trust, or reputation. At Fama, we don’t care about your dog photos, your personal opinions, or civil political discourse. We specifically screen for behaviors that threaten workplace safety, violate company policy, and harm brand image.

As these cases show, waiting for an issue to go viral or lead to litigation leaves employers with fewer choices and greater risk. Proactive, responsible screening enables organizations to prevent misconduct before it enters the workplace, and before it becomes a headline.

For more information on how Fama prevents misconduct at work, request a demo.