Grok AI Generating Fake Nudes, Racist Posts, & Viral Food Safety Violations: #9to5Nightmares ep 21

It’s now 2026, and with the new year comes a new wave of misconduct scandals in the news. This month, #9to5Nightmares hosts Amy Warren and Micole Garatti focus on several high-profile social media scandals from around the world.

Today’s tech-savvy digital world means new types of misconduct can go viral and scale quickly. January’s episode starts by unpacking a new viral trend in which people are using X’s AI Grok to generate non-consensual sexually explicit photos of women and children, raising widespread concerns over online privacy, human rights, and workplace misconduct.

The episode continues with additional scandals, including a celebrity doctor under review by a medical tribunal over social media misconduct, a barista going viral for serving a customer a drink she mixed with her bare hands, and findings from a recent Fama report in financial services.

These cases emphasize the importance of robust policies and protocols to prevent workplace harassment and misconduct, and highlight the need for organizations to review and update their codes of conduct and harassment policies.

X Users Create Non-Consensual Sexually Explicit Content With X’s AI Grok

A new social media trend is raising massive privacy, human rights, and workplace misconduct concerns as X users use X’s AI Grok to generate fake, non-consensual sexually explicit photos of women and children at scale.

Several researchers have found that, among the posts they sampled, “nearly three-quarters of posts were requests for nonconsensual images of real women or minors with items of clothing removed or added.” Some accounts were generating up to 6,700 undressed images per hour.

Posts included things like: “make her butt even bigger,” “switch the bikini print to USA print,” and “give her a dental floss bikini.”

This is a moment for employers to revisit their codes of conduct and social media policies, and to retrain employees on the dangers and consequences of this harmful online behavior. (Guardian)

Viral Video Shows Barista Preparing Drink with Bare Hands

A Chinese milk tea chain, Chagee, temporarily closed a location and terminated an employee after viral videos showed the worker severely violating food safety standards by mixing drinks with her bare hands.

The footage captured the employee, in uniform, preparing a drink for a customer with her bare hands: handling ingredients without gloves, stirring the beverage by hand, and pouring tea over her hands into the cup. The company added that the employee was using leftover ingredients shortly before closing time and confirmed her actions were in extreme violation of company safety protocols.

At a time when everything we do in public can be recorded and shared online, it’s important for employers to safeguard their customers and reputation from people who pose safety and reputational risks. (Channel News Asia)

Former Apprentice Contestant Faces Medical Tribunal Over Racist Social Media Posts

A doctor and former contestant on BBC’s The Apprentice in the UK is facing a General Medical Council tribunal over allegations of social media misconduct that could impact his ability to practice medicine. 

Regulators allege that the doctor shared posts pushing Holocaust denial, conspiracy theories about Jewish people, sexist remarks about gender roles, and racist language. 

The doctor also faces separate allegations that he provided a patient with a sick note while suspended from medical practice, raising additional concerns about professional conduct and regulatory compliance. (BBC)

Fama Finds Social Media Misconduct in Financial Services Screening

In a recent screening, Fama surfaced a concerning pattern of online misconduct from a candidate in financial services. The candidate’s social media history included posts mocking the company’s customers and the readers of financial services industry news outlets, as well as posts about a recent incident in which the candidate was arrested for animal abuse.

This case shows how aggressive and demeaning online behavior doesn’t stay online; it often has real-world consequences. Especially in regulated industries that rely on public trust and regulatory compliance, it’s important to conduct due diligence to ensure a positive and safe working environment.

Mitigating Workplace Misconduct with Social Media Screening

In today’s hyper-connected, digital world, the lines between personal online behavior and professional life continue to blur, making organizations vulnerable to a new wave of misconduct that can go viral and cause swift, severe damage to reputation and public trust. 

From the weaponization of AI to generate non-consensual images, to blatant food safety violations, and the re-emergence of hateful rhetoric online, these cases underscore a critical need for organizations to proactively safeguard their workplaces and brand. It is no longer enough to react; companies must update their policies and implement robust due diligence strategies, like Fama’s social media screening solution, to consistently identify and mitigate employee and candidate risks before they become a nightmare scenario. 

For a deep dive into these scandals and practical steps your organization can take, listen to #9to5Nightmares Ep 21: Grok AI Generating Fake Nudes, Racist Posts, & Viral Food Safety Violations below or on Spotify.

If you want to understand how organizations are identifying online behavioral risk earlier and protecting employees, customers, and their reputation, explore Fama’s solutions while you listen.

9 to 5 Nightmares – Episode Transcript

Amy: Hi everybody. Happy New Year! It's 2026, and I'm Amy, and this is 9 to 5 Nightmares.

Micole: I'm Micole Garatti, and we talk about misconduct so you can avoid it.

Amy: And we're starting the year off with something that I want everyone to just kind of pause and listen to, because if you don't know about it, chances are you're going to find out about it sooner rather than later, and we want to give you the heads up on it. So what's happening right now on X, formerly Twitter, is people are using its AI to take photos of people, primarily women and children, and the AI is then generating nudes of them, and it can adjust them, do different things to them, and then people are sharing these photographs online. It is a big problem right now, and there's nothing being done to stop this. So, Micole, jump in, because

Micole: Yeah.

Amy: Hang on.

Micole: So, people have been using X's AI Grok to, like you were saying, take perfectly normal photos of people and then alter them without the person's consent, and a lot of that is being done to women and children. And so what's happening is there have been researchers, in Dublin, all over the world, who have been looking into this because they're starting to notice it happen more and more, and they were saying that nearly three-quarters, nearly 75%, of these posts to Grok are requests to do this to people. And so it's being done at scale. It's being done to all different kinds of people. This is really mass sexual harassment and sexually explicit extortion of people, because they're... they're using them for specific harm. And this is just... yeah, I have no words. It's just disgusting.

Amy: Yeah, and I think for our listeners out there, the reason that this is, A, number one, it's important for you to just know that this is going on

Micole: And number two

Amy: It's gonna become even more important for you to be screening people's public social media, because you may now find instances of people sharing these kinds of images, engaging with these kinds of images, and not only do you want to know that somebody has a willingness to do this, and the risk that could pose in your workplace if they do something like this to

Micole: Yeah.

Amy: A colleague, a fellow employee. Even if it's not in the workplace, just that they're doing it, and then they send it to somebody in your workplace. That is huge, huge harassment exposure, a huge level of risk. And we really want to make sure that everyone understands that this is going on.

Micole: Oh, yeah.

Amy: You know, not only does this raise the definite concerns of, yes, you should be screening your candidates, right, that are coming in, but this is also, in my mind, a five-alarm fire for risk because of the scale that's happening. You heard what Micole said: up to 75% of the posts on this tool are requests for this type of content.

Micole: Yeah, and when I'm talking about scale, Amy, one additional report from what I was reading said that, like, almost 7,000 photos can be created like this per hour. Per hour. Like, that's the level of scale that is happening. And they're just

Amy: People's likeness, right? So this is a photo of a real person. This is not

Micole: A real person that was not right.

Amy: Right.

Micole: Right, a real person that posted a regular photo of themselves that is now being turned into bikini pics, nudes, whatever the person wants it to be. Like, I'm struggling for words right now because it is just so disgusting. And the more I was looking into this, like, I have had issues with people taking my image off of my personal accounts and using it in other places, and it's one of the reasons why I stopped sharing photos of myself online. And I know that if I did have children, and one of them had this issue where somebody was doing this to them

Micole: That would be like, I couldn't even imagine. I couldn't even imagine.

Amy: We're talking about so many different levels of this, right?

Micole: Well, there's so many different levels.

Amy: There are, and I will say, you know, it's funny you're bringing up your own personal experience. And maybe it's because of the work that we do, or just maybe because I'm in marketing, right? My husband and I made a personal decision years ago to not put any photos of our kids online.

Micole: It's smart. It's smart.

Amy: And people would criticize us for this, right? And we would just ignore it, like, whatever, right?

Micole: Right.

Amy: But now, like, I never would have anticipated, you know, 10 years ago when we made this decision, that we would now be in a moment where something like this could happen. It never, ever, ever crossed my mind that this would be the use case. So I think, you know, we're devoting a lot of time on this particular episode to this issue because I think it's a workplace issue, but it's also a human issue.

Micole: 100%.

Amy: I want everybody to be aware that this is going on and the level of risk associated with this in your workplace.

Micole: Yeah.

Amy: I mean, God, like, could you just imagine, like, you know, what is the harassment gonna look like this week? It's, you know, you just got an email from, you know, one of your managers with a complaint because, you know, one of their coworkers is posting nudes of them, or sexually explicit

Micole: Oh my God.

Amy: Images of them, and what if they put it on a public Slack channel? I mean, like

Micole: Right.

Amy: These are steps that are not far from this. With the scale of this that's happening right now, it's not, and we always say this, it's not a matter of if, it's a matter of when. And so, you know, everyone that we're talking to right now who's listening to us, you know, in the HR space, be mindful of this. Start having these conversations internally that this could definitely be a possibility and a challenge. And I would say, too, you know, all of your policies should already be hitting on this. This should not

Micole: Right.

Amy: be an issue at all with your policies. And Micole and I talk about this all the time. You know, dust off that code of conduct.

Micole: Yeah.

Amy: Make sure your social media policy and your harassment policy have no loopholes on this.

Micole: No.

Amy: Call inside counsel right now.

Micole: Yeah.

Amy: Get a meeting on your inside counsel's calendar, and just sit down and talk to them, and make them aware of this risk, because it's only going to continue to grow, unfortunately, because right now, nothing's being done to stop it. And here's the sad part. The sad part is, it's not hard to make it so that these images can't be made.

Micole: Yeah, no, it's not hard at all. And I think the other thing, too, is, like, this could also be a moment where HR says, hey, maybe we retrain our employees on what our code of conduct is. Maybe we sit everyone down, and we say, if you do this, this violates these 17 policies in our handbook, and you will be terminated immediately. You know, you need a protocol for this. Maybe you need to remind people of the protocol you already have, which should already cover this. But it's definitely a moment to stop and think and check in legally, check in with your employees, because, like Amy said, it's not a matter of if, it's a matter of when.

Amy: And I also think, too, you know, if you're in an organization that's large enough that you have a head of risk, you should also be talking to your head of risk about this. Yeah. And, you know, this is not a shameless plug for what we do, but this is exactly why we're here and why we exist. And, you know, I would love it if we didn't have to do what we do at scale, but this is the reason why we have to do what we do at scale. And this is why Micole and I do what we do on a daily basis, because we're so committed to making sure that everybody has an opportunity to work in a safe workplace. And so, you know, if you want to learn more about what we do, how we can help you identify these things, you know, send us a comment on LinkedIn. You know, we're tagged on all of this on LinkedIn. You know, send an email to sales at fama.io. Go to our website, fama.io, fill out a demo request form, and maybe you'll get to talk to someone on our sales team, who's awesome, or Avi, or Anthony, and they can talk to you a little bit about this. Because I think, like, if you're thinking about, hey, what's my plan for 2026? What are the things that I need to be aware of? What's going on? If there's one thing I think you need to be investing in this year, it's something that's going to ensure you're reducing risk while also increasing quality of hire and reducing your cost per hire, right? Because you're not ending up having to remove somebody who you thought was a good hire. For such a small amount of money, you are getting so much reassurance that the people you're bringing into your organization, or the people you have in your organization, are not exposing you to this level of risk. So

Micole: Yeah.

Amy: Yeah, so, just gonna end there on this. We'll go on to the next thing, because we always have things that are serious, but we always have things that are a little bit funny. And what I'm thinking of as funny, Micole, is the barista story?

Micole: Yes.

Amy: Which is, like, ridiculous.

Micole: Oh, it's oh, it was so gross, Amy. It was so gross.

Amy: Now, I didn't see this, but after this discussion, I'm gonna go after this, and I'm gonna go look this up, and I'm sure that our listeners

Micole: Send you the link. I'll send you the link. Yeah, no, it was disgusting. So, okay, I'll just go into what happened. So, a Chinese milk tea chain, Chagee, I'm not sure if I'm saying that right, somebody correct me in the comments if I'm not, but they had to legitimately close a location, like a store location, and fire an employee because the employee was caught on a viral video using her hands to make the drink. And also, it came out after that

Amy: Is it hot? Was it hot or cold?

Micole: No, it was cold. It was, like, iced. It was like an iced coffee, essentially, and she had, like, her hands in it, mixing it. It was so disgusting. It was so gross, Amy. And then also, it turned out that the ingredients were, like, leftover ingredients from somebody else's drink. It was, like, all kinds of gross. It was so gross. It was disgusting. And a customer, like, recorded this, recorded her fully, and then she handed the drink over, and the company was like, this is disgusting. Like, they came out and said, we have machines that are supposed to do this. Why was she even doing this? Like, they don't even know! And so they had to shut the entire store down, retrain people, clean everything, and then reopen.

Amy: I, like, there are, there are no words. Like, I will say that, you know, I've never had the opportunity to work in, like, the food service area, right? But even as a person who doesn't work in food service, but just has maybe, like, common sense, right? Or, you know, I mean, I guess

Micole: Would never.

Amy: I would never put my hand in my own drink like that, you know? Like, not to mix it, not to do anything. I mean, the only thing I can even think of is if, God forbid, I dropped something in my drink, and then I needed to get it out, but then I wouldn't finish drinking it. I don't even

Micole: But you wouldn't hand it over to a customer.

Amy: No! No, that's what I'm saying. Like, this is just so wild. And by the way, okay, so let's, like, bring this all back. So now, this gets posted on social media, right? Now, granted, this is over in China, so, you know, for our audience that's primarily based in the US and in North America, you're probably not gonna come across this particular person online. However, there are so many instances, whether it's all those DoorDash drivers we were covering that were doing crazy things

Micole: Oh, yeah.

Amy: Really.

Micole: Yeah, 100%.

Amy: You know, anytime you have an employee that's doing things for the public or in the public space, this is an opportunity for them to be recorded and for that to be posted.

Micole: Yeah. Online.

Amy: And create a viral moment. And so, you know, I think it's about, like, hey, do you want to hire this person again? Or at least you want to be aware, because if you're hiring the barista who went viral for mixing drinks with her hands, and you're in the food service industry, chances are that's going to negatively impact your business.

Micole: Checking that.

Amy: You may end up in this kind of a situation. I, I don't know, like

Micole: This is, again, it goes up with, like, the police officers doing

Amy: Father and duties, and it was like, I just, like yeah, I

Micole: Yeah, like, even if you make your drink like that at home, like, you don't serve that to anyone else.

Amy: You know, we just went from something, like, super, super serious that I want everyone to take seriously, to this, and I will say, right, like, the theater of the absurd in what we do on a monthly basis

Micole: Oh my God.

Amy: I'm always here for this. I'm always here for this, and I hope, like, our listeners are, too. We didn't see anything about Christmas parties. That was my shocker when we were doing this and the team was pulling all of the data: we did not see any holiday party social media craziness happen, or at least it just hasn't come out yet.

Micole: Maybe we haven't seen them yet.

Amy: That's what I yeah, yeah.

Micole: Yet, maybe. We'll see. We'll see when we come back in February.

Amy: So now, we're gonna continue our globetrotting, and we're gonna bring it over to the UK, where we had this really interesting issue with a contestant from The Apprentice. So

Micole: Former contestant. Yeah, so, okay, so a medical doctor who was practicing medicine just happened to be on BBC's version of The Apprentice a couple years ago. But in his social media history, he's got several different posts where he's, you know, saying things like women belong back in the kitchen, and racist comments, and Holocaust denial, and... this guy hates everyone. He doesn't like anyone. And he's now under investigation by a General Medical Council tribunal, because they're like, if this is what he believes, and this is what he's publicly saying out to the entire world, how is he gonna be a fair and impartial doctor to all kinds of people who may come to his practice? And we talk all the time about patterns of behavior and how online behavior has real-world consequences; it also turns out this doctor was writing sick notes for patients while he was suspended. His license was suspended. And so now, he's under additional investigations for different types of compliance and other risks involved in the medical space, because he wasn't just saying that he doesn't believe certain people are human and deserve the same rights, he's also showing bad judgment in terms of when he's able to legally practice medicine and when he's not. And so, yeah.

Amy: You know what I think is really interesting here? If you've got, you know, over in the UK, it's the medical tribunal that decides whether or not you can

Micole: Yeah.

Amy: Right? Because, you know, being a doctor, it's just not about education. You have to be licensed, and there's people who weigh in on whether or not you can hold a license. And I think the interesting part is you now have this governing body that is saying, we're reviewing your social media posts, and we're taking that into consideration on whether or not we think your behavior is becoming of a doctor, right? Right. And how can that behavior negatively impact your ability to practice medicine? So I think, like, in our

Micole: Right.

Amy: world, this is something that we're seeing, where people are starting to take what's happening online

Micole: Yeah.

Amy: and making judgments about how someone's online behavior is going to affect their ability to perform in the offline world.

Micole: Yeah.

Amy: This is just the continuation of a trend that we're not gonna see go away, especially because of, you know, two things. One, people spend more time online than they do offline, and then number two, you know, we talk about this a lot, the percent of the workforce now that's digital natives. And, you know, if that's a new term for you, all that means is somebody who was born after the internet was the internet, so they've been online pretty much all their lives. So, you know, there's an increase not just in the activity that's happening online and where people spend their time, but also in the depth and breadth and length of time that you have to look at somebody's pattern of behavior. So, you know, this is something we're just going to continue to see more of, and if, you know, this is the first time that you're hearing about this or learning about this, this is why we're here, right? This is part

Micole: Right.

Amy: of why we do this. So now we have another doctor who's behaving poorly. If you're wanting to hire doctors and you're not aware of this, none of this will come up on a traditional screen.

Micole: No, no, no. Unless he was convicted or had some kind of legal problem, and right now, that's not the case. It's in limbo, right?

Amy: Right, exactly. So, right now, this person, you know, can be easily hired somewhere, and the likelihood of this being, like, found out is small if you're not screening their online activity. So, you know, this is just more of all of the things that we're seeing, and then I think, like, just to round it out, what we've been finding the last couple of weeks in the reports that we've been running is we're just seeing misconduct in financial services.

Micole: Yes.

Amy: People in financial services behaving really poorly.

Micole: Yeah. Like food service and healthcare, financial services is also a regulated industry, and so, especially in places like the US and the UK, these industries are heavily regulated in terms of both financial misconduct and also, now, non-financial misconduct. And so, in one report on a candidate, we found a lot of different posts, a pattern of posts. Some of them were just general FUs to everybody. Some were mocking customers of the organization they were at, mocking readers of certain publications that are the audience of the company they work at and represent. In other cases, and this is more serious, the posts also spoke about the candidate being arrested for animal abuse. And so, it just showed a pattern of behavior, of aggressiveness, of disrespect, and regulatory and safety issues.

Amy: And I think, too, in this particular instance, you know, disparaging

Micole: Yeah.

Amy: Customers.

Micole: Yeah.

Amy: You know, no matter the business that you're in, you know, having somebody who's employed by you publicly posting and dropping F-bombs left and right against your customers

Micole: Right. Nobody, nobody wants that, and especially

Amy: In the financial services industry, when, you know, the value of an individual customer, in many instances, is so, so, so high.

Micole: Right.

Amy: So, there was a lot going on in this particular report, and, you know, if you put aside the legal issues around the animal cruelty and just took one or two of these instances and looked at them in isolation, some of them maybe you wouldn't be too happy with, maybe it wouldn't be the best thing, right? Maybe you would probably have a conversation with somebody about it.

Micole: Right.

Amy: And you may decide to offer them employment. The difference in this particular situation is when you see the volume

Micole: Yeah.

Amy: And choices that are being made, you can make a determination that, you know, what is the likelihood and what are the chances that I'm gonna be on the receiving end, my company’s receiving end, of this kind of behavior, because there's such a pattern of behavior.

Micole: And I think, too, I think the violence and the threats

Amy: You bring up something really important. You know, we saw this a lot last year with our misconduct report that came out: people were starting to move from not just saying things, but acting on them at a higher rate.

Micole: Yeah.

Amy: Like, that should be definitely a warning sign, because we know that

Micole: Yeah.

Amy: We're seeing an increase of people going from, "I'm just saying that I'm gonna do this," to actually taking action on it.

Micole: Yeah.

Amy: So anyway, this is where we are for the beginning of the year. It's gonna be an interesting 2026, and we're gonna keep coming to you. We're gonna make sure that we try to get ahead of things so you're aware. That's our goal this year. Just like we talked about in the beginning, we want to be able to keep you ahead of what's going on so you know, and you can take preventative measures to make sure that when it does happen, you've got a plan, you've already thought about it, or you've already mitigated that risk. So that's what we're looking for. And I'm Amy.

Micole: And I'm Micole Garatti, and we talk about misconduct so you can avoid it.

Amy: See you everybody next time. Bye.