Lessons From a Trust Crisis: 'AI Made Me Do It' Is No Excuse

A hiring platform's AI misstep sparked outrage, regulatory scrutiny, and a trust crisis. Sandra Sucher explains how the company recovered and suggests six questions every leader should ask to use AI responsibly and maintain public trust.

It’s an executive’s nightmare: A product violates privacy laws, drawing regulators’ ire and igniting a media scandal. And when AI contributes to the mistake, the public’s trust can be even harder for the company to restore.

That’s what happened to Kentaro Mori, who led risk management in 2019 at Recruit Holdings, the Japanese human resources technology company that owns popular job-search sites Indeed and Glassdoor. He woke up on his birthday expecting well wishes and instead found his cell phone ringing about a subsidiary that was accused of running students’ data through an algorithm—without their consent—to predict whether they would accept jobs, then sharing those predictions with hiring companies.

Coming early in AI’s adoption, the lapse in judgment hit a nerve. Students were outraged, Japan’s privacy commission admonished the company, and the public cried foul, saying Recruit’s algorithm unfairly excluded candidates from being considered for positions.

“You can offend the public deeply by using AI in a way they don't approve of,” explains Harvard Business School Professor Sandra Sucher. “A good answer will never be, ‘AI made me do it.’”

Sucher studies how companies can build trust, including through responsible AI use, and finds a ready roadmap in the way Recruit acknowledged its mistake, apologized, and ultimately recovered from the misstep. She chronicles the company’s journey in an HBS case study, “Recruit Holdings Co. Ltd.: Managing Innovation and Trust in the Age of AI,” and its B supplement, which raise important questions as companies move to incorporate AI into their operations.

“What kinds of decisions are people comfortable having an algorithm make on behalf of the firm? And what kinds of decisions should firms be making themselves in order to be held accountable for their actions?” asks Sucher, the MBA Class of 1966 Professor of Management Practice. “These are forever questions, but AI has amplified them because of the number of decisions that it can help with, and the current rush to adopt it. Companies appear to be turning over responsibility for decisions that they nonetheless are still on the hook for.”

Sucher outlines six key questions executives should ask to make sure their companies’ use of AI earns—and maintains—public trust.

1. Are you considering the public’s interests?

In the 1980s, Recruit’s leaders were involved in a shares-for-favors scheme that tarnished its reputation, and Sucher used the scandal as an example in her book The Power of Trust: How Companies Build It, Lose It, Regain It to show how companies can improve their culture.

“Recruit recovered through a decades-long process of building a culture that focused on a mission of adding value to society,” empowering employees to meet the needs of customers, the case says. That shift helped move the company away from perceptions of greed toward an identity shaped around service and giving back.

“The heart of being trustworthy is that you take the other person's interests into account,” Sucher says. She ultimately expanded on the book’s description of the AI scandal with the HBS case and supplement, which she wrote with Marilyn Morgan Westner and Bethelehem Y. Araya, research associates at HBS.

2. Is AI making the wrong decisions?

By 2019, HR companies were growing more comfortable using AI to help employers automate repetitive tasks, including screening applicants for jobs. Recruit’s Rikunabi subsidiary, a job-search platform with 30,000 companies and 800,000 students, faced a challenge brokering the fragile moment when young adults launch into their first jobs, a particularly fraught time in Japan because of the culture of lifetime employment.

Rikunabi heard from employers that students accepted offers only to reject them later, a costly and time-consuming problem for companies. It designed an algorithm—and then launched it as a pilot with 38 companies—to help determine whether students were likely to decline based on their browsing patterns, including the company pages they visited most frequently.

Companies were not supposed to base hiring decisions on the predictions, but still, the use of AI didn’t sit right with many. One Twitter user wrote, “It would be unbearable if something like this would automatically determine that the rejection rate for job offers is high and result in being rejected.”

“Students were looked at as highly vulnerable, and it just struck people as fundamentally unfair to have an algorithm, rather than the company, making the determination,” explains Sucher.

3. Are you taking responsibility when things go wrong?

Recruit quickly shut down the program. The public called for a scapegoat, with one Twitter user writing, “I think the person in charge of Recruit Career’s Rikunabi DMP Follow needs to do some serious self-reflection ….”

But instead of placing blame, Sucher commends Recruit’s leaders for supporting the team behind the controversial product. “Quite often when there's a scandal of this kind, people distance themselves from the person who's ‘caught.’ And in this case, they saw their job as helping that person navigate this scandal,” she says.

Recruit also went above and beyond in supporting affected students, Sucher says. It directly contacted the companies using the service to make sure student data was deleted, apologized individually to each of the 7,983 students whose data had been shared without their consent, and set up a website where students could check whether their data had been used.

4. Are you learning from your mistakes?

As Recruit executives learned more about privacy and AI, they realized they had a lot of work to do, Sucher says.

Across 300 products, the company had 1,800 different privacy policies. Recruit recognized its failure to consolidate them into a single system and decided that even experimental products in a trial phase would be held to the same standards.

“This is best practice for trust problems: They used it as an opportunity to get better,” says Sucher. “They organized a team and a process to create essentially one policy.”

5. Do you have the right guardrails for experiments?

After the 1980s scandal, Recruit partly rebuilt trust with a culture of bottom-up experimentation, says Sucher. Critics blamed that freewheeling ethos for insufficient oversight—after all, Rikunabi DMP Follow was a pilot service. In a nod to calls for more centralized oversight, Recruit consolidated seven of its separately managed companies into one.

But the company didn’t discard its culture of experimentation. In an interview during his retirement, Daizo Kobayashi, then-president of the Recruit Career business that oversaw the Rikunabi site, said executives had discussed the issue at length and had decided trials and pilot projects should still bubble up from the bottom.

“We continue to encourage the sprouting of ideas and challenges,” he said, but “we need to make sure we have the right rules in place.” Sucher applauds the way Recruit “doubled down on their culture of innovation and empowerment by pairing it with better governance that allows innovation to be done more safely.”

6. Are senior leaders crafting a solid AI strategy?

Companies that are incorporating AI responsibly aren’t leaving it only in the hands of the technology department—but many businesses are making the mistake of doing just that, says Sucher.

“People who are making these calls tend to be more chief technology officers, rather than the senior leadership of the firm,” she says. “If the senior leadership isn't involved in these decisions, they should be because these are strategic decisions that impact the firm.”

Elevating AI to a C-suite responsibility acknowledges that its use carries ethical stakes—and that customers expect companies to get them right.

Sucher concludes: “The biggest issue that I see right now in the AI space in general is that firms don't appear to be thinking, ‘What is the best use of this in my firm?’ AI should not and cannot govern itself. Companies need to have a point of view about what uses they're putting it to, and they need to have policies to make sure that's what's going on in the firm.”

Illustration by Ariana Cohen-Halberstam with images from Adobe Stock.

Sucher, Sandra J., Marilyn Morgan Westner, and Bethelehem Y. Araya. "Recruit Holdings Co. Ltd.: Managing Innovation and Trust in the Age of AI." Harvard Business School Case 324-025, October 2023. (Revised January 2026.)

Sucher, Sandra J., Marilyn Morgan Westner, and Bethelehem Y. Araya. "Recruit Holdings Co. Ltd.: Managing Innovation and Trust in the Age of AI (B)." Harvard Business School Supplement 324-026, October 2023. (Revised January 2026.)
