CMR INSIGHTS

 

Cautious Adoption of AI Can Create Positive Company Culture

by Joseph Pacelli and Jonas Heese

Four rules to ensure successful use of AI in the workplace.

In 2014, Andy Haldane, then chief economist of the Bank of England, laid out a vision for global financial surveillance systems.1 “I have a dream. It is futuristic, but realistic,” he said. “It involves a Star Trek chair and a bank of monitors.” He was dreaming of technology that could track the global flow of funds in real time, much like the technology used to track the weather or internet traffic around the world. Many credit Haldane’s speech, detailing his hope for high-level supervision, with launching the RegTech (regulatory technology) industry, a class of software applications for managing regulatory compliance. Less than 10 years later, RegTech that incorporates advances in AI (artificial intelligence) seems to be the solution for corporations managing hundreds of compliance regulations and financial risks while attempting to build a positive company culture, one that upholds a good reputation and attracts and retains diverse workers. However, as a relatively new field, AI is rife with opportunities for misuse. To improve company culture successfully, corporations must embrace AI cautiously.

The National Artificial Intelligence Act of 2020 defined AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” Slowly, AI has become a crucial element of our work lives. Companies commonly use AI technologies in marketing, operations, customer service, risk management, supply chain operations, and more. Critically for company culture, AI can be, and has been, used to screen potential employees and to monitor their behavior once they are on board, through a related class of tools referred to as HRTech (human resources technology). With an increase in compliance regulations as well as public monitoring, a corporation’s reputation is central to its bottom line, and doing business with integrity is key to that reputation. We argue that the most important and effective way for any organization to create a strong culture is through its people, and that AI can successfully be used both to select and to monitor employees. But companies are still trying to figure out how. In a survey run by Accenture, 84 percent of C-suite executives said they believe AI must be leveraged to achieve their growth objectives. However, 76 percent said they struggle with how to implement and utilize it.

Solving Screening Challenges

A typical job interview lasts between 45 minutes and an hour, giving managers little time to determine whether an applicant will be a good fit. To be more confident in building company culture, hiring managers need more data. Powerful new AI tools are emerging to help managers see a fully rounded picture of an applicant before making the decision to hire.

One such tool is Fama, an AI screening software that goes beyond the typical background check. As social media began its rapid rise in the early 2000s, a slew of new PR and HR problems rose with it. Today, it’s common to read a news story about a company leader who got into hot water over a problematic tweet or Facebook post. Fama does the social media deep dive on potential employees before their posts, past or present, become a problem. The company promises to “surface a range of harmful online behaviors at the point of hire to protect your business, reduce workplace toxicity, and reinvent your organization around a human focus.” An example of a post Fama would flag, according to the company’s website, is: “Best way to leave a job is to go out with a bang.”

Fama’s technology certainly offers a lot of promise. Yet the difficulty with such technology is that people generally balk at the idea of a company monitoring behaviors not directly related to their work. A Fama background check may seem invasive to prospective employees, or they may worry that the screening tool is overly sensitive or unreliable. After all, machines are not perfectly unbiased decision makers; they make errors and can introduce biases of their own when predicting bad outcomes. For instance, some research suggests that the software banks use to screen credit card applicants inadvertently favors well-to-do white applicants.2 Fama states that it follows all required compliance standards and promises fair and accurate flags. Nevertheless, when employing this type of screening tool, corporate leaders should still carefully scrutinize what the software flags as problematic.

Another emerging type of AI tool helps companies attract the right employees from the start. In one of our research studies, we examined nearly 25 million job postings between 2010 and 2020 and found that cultural information in a posting has a pronounced effect on attracting workers, an effect that has strengthened in recent years following prominent social movements such as Black Lives Matter.3 Today’s candidates often look for companies with a purpose and values they identify with. Misrepresenting company culture in job postings can be costly, leading to reputational damage, poor employee matches, and turnover. Crafting job postings that clearly convey the company’s culture attracts not just applicants but the right applicants.

For example, a company called Textio analyzes the language used in a job posting, points out which words are more attractive to which groups of people, and compares the language in the posting to that of the company’s competitors. Unconscious biases may lead to job postings that convey a different culture than the one the company is trying to build. Textio finds that Facebook job ads frequently use the phrase “hungry for,” which leads to more men applying. Hulu, on the other hand, frequently uses the word “humility” in its postings, leading more women to apply. A Textio report both flags language that may not be in line with the company culture and offers suggestions. Instead of the masculine-leaning “driven by,” for example, Textio suggests “inspired by,” which tends to resonate more with women. The AI program also helps with showcasing empathy and respect through job postings, avoiding clichéd language, flagging implications of a fixed mindset, and giving tips like “needs fewer ‘you’ statements” and “add a few exclamations.”
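
To make the mechanics concrete, here is a minimal sketch of phrase-level posting analysis in Python. It is only an illustration of the general idea, not Textio’s actual model: the phrase lists are hypothetical stand-ins built from the examples above, and a real system would score language statistically against applicant outcome data.

    # Hypothetical sketch of phrase-level job-posting analysis.
    # The lexicons below are illustrative, not Textio's.
    MASCULINE_LEANING = {
        "hungry for": "excited about",  # phrase -> suggested replacement
        "driven by": "inspired by",
    }
    FEMININE_LEANING = {"humility", "collaborative", "supportive"}

    def analyze_posting(text):
        """Flag gender-leaning phrases and suggest alternatives."""
        lowered = text.lower()
        flags = [
            f"'{phrase}' leans masculine; consider '{alt}'"
            for phrase, alt in MASCULINE_LEANING.items()
            if phrase in lowered
        ]
        feminine_hits = sum(p in lowered for p in FEMININE_LEANING)
        if len(flags) > feminine_hits:
            lean = "masculine"
        elif feminine_hits > len(flags):
            lean = "feminine"
        else:
            lean = "neutral"
        return lean, flags

    lean, flags = analyze_posting(
        "We want engineers hungry for impact, driven by hard problems."
    )
    print(lean)              # masculine
    print("\n".join(flags))  # two suggested rewrites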

Improved Monitoring

Even with the best screening tools, bad apples slip through the cracks. AI technology can step in here too, to identify and monitor them. Typically, employee monitoring follows either a Top-Down or a Bottom-Up approach, and we’ve seen good and bad examples of each.

An example of Top-Down monitoring software that turned out to be less useful for flagging employee misbehavior echoes the premise of the popular movie Minority Report: it was intended to spot a violation before it happens. In the dystopian world of the movie, people can be arrested based on the prediction that they will commit a crime. While not as drastic, HSBC (the Hongkong and Shanghai Banking Corporation) explored an AI program that follows patterns in employee communication to assess risk and proactively investigate before problems start.4 The program looked at email metadata, flagging changes in communication behavior, such as when supervisors or risk functions started receiving fewer emails than the norm or when teams had an uptick in emails outside working hours. A case study we worked on detailed this AI tool at HSBC, with the bank’s then global head of operational risk arguing that “with the volume of data being studied, it would be very difficult for anyone to ‘game’ the system.” Unfortunately, the volume of data became a problem. The AI tool highlighted so many suspicious interactions that internal audit would have been overwhelmed trying to follow up on all the (mainly false) red flags. While the tool turned out to be less useful for spotting problems, it still helped leaders understand the social dynamics within teams. For example, who is the person other employees gravitate toward? Who is less well integrated into the team? Perhaps not surprisingly, our research also shows that banking and money laundering violations remain pervasive, potentially because AI is not yet sufficient for identifying internal personnel problems.5,6
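
The underlying mechanism is straightforward anomaly detection on communication metadata: compare each employee’s current behavior to their own baseline and flag large deviations. The Python sketch below illustrates that idea; the metric names, thresholds, and data are hypothetical, since the details of HSBC’s system are not public. It also hints at why false positives pile up: across thousands of employees and many metrics, even rare statistical flukes produce a steady stream of flags.

    from statistics import mean, stdev

    # Hypothetical sketch of metadata-based flagging: z-score each metric's
    # latest weekly count against the employee's own history.
    def flag_anomalies(weekly_counts, z_threshold=2.0):
        """weekly_counts maps a metric name to weekly counts, newest last.
        Returns (metric, z-score) pairs that deviate from the baseline."""
        flagged = []
        for metric, counts in weekly_counts.items():
            history, current = counts[:-1], counts[-1]
            if len(history) < 2:
                continue  # not enough baseline to judge
            sigma = stdev(history)
            if sigma == 0:
                continue  # no variation to measure against
            z = (current - mean(history)) / sigma
            if abs(z) > z_threshold:
                flagged.append((metric, round(z, 2)))
        return flagged

    employee = {
        "emails_to_supervisor": [14, 12, 15, 13, 3],  # sudden drop
        "after_hours_emails": [2, 1, 3, 2, 2],        # within the norm
    }
    print(flag_anomalies(employee))  # flags the drop in supervisor emails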

To be clear, this is not to say that AI and information technology are unimportant components of a firm’s monitoring system. On the contrary, several corporations have successfully used ERP (enterprise resource planning) systems to monitor operations. ERP systems provide a centralized platform to track information about business activities across accounting, risk management, operations, supply chain, human resources, and more. Our research finds that ERP adoption is associated with a significant reduction in facility-level violations and penalties, which cuts costs for the company.7 In our sample of 5,733 facilities, standard use of ERP reduced the dollar cost of penalties by 17 percent.

Under the Bottom-Up approach, firms typically struggle when they attempt to analyze social media or other internal data to spot problematic behavior. Especially since 2020, when the pandemic forced most people to work remotely, online searches for “how to monitor employees working from home” have increased by more than 1,000 percent. For example, the Wall Street Journal reported on a Florida social-media marketing company that installed software on employee computers to take a screenshot of their desktops every 10 minutes and record how much time they spent on different activities. Yet survey data from U.S. employees, previously reported in Harvard Business Review, finds that such monitoring makes employees more likely to break rules, such as taking unapproved breaks or purposefully working at a slow pace.

On the other end of the spectrum, a company called Veolia, which has long struggled with various forms of misconduct throughout its international operations, has successfully used technology to encourage employees to send in leads and tips about potential problems, improving compliance. Veolia’s management had a simple insight: employees typically observe problems first but are often reluctant to speak up. To empower employees to speak up, Veolia partnered with Whispli, an anonymous two-way communication platform primarily used for corporate whistleblowing.8 Whispli is essentially a special email inbox that conceals the identity of the whistleblower and allows for attachments like photos, videos, and audio recordings. Importantly, the platform removes metadata from attachments to keep whistleblowers’ identities truly anonymous. In a similar vein, recent research uses a machine-learning algorithm to measure a firm’s misconduct risk from reviews left on Glassdoor.9 Because Glassdoor reviews capture employees’ perceptions of firm culture, CEO approval, business outlook, and diversity, the platform offers excellent data for an AI program to assess misconduct risk. The algorithm employed in this study allowed firms to predict future misconduct before employees filed whistleblower complaints with external regulators.
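
In spirit, such a review-based risk measure is a supervised text classifier: train on reviews from firms whose misconduct outcomes are already known, then score new reviews. The Python sketch below shows the general pattern using scikit-learn; the toy reviews, labels, and model choice are invented for illustration and do not reproduce the cited study’s method.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: review text, labeled 1 if the firm later
    # faced a misconduct event (these labels are invented).
    reviews = [
        "managers pressure us to hit numbers no matter what",
        "compliance training is taken seriously here",
        "leadership looks the other way on safety shortcuts",
        "supportive culture, and the ethics hotline actually works",
    ]
    labels = [1, 0, 1, 0]

    # Turn review text into word/bigram features, then fit a classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(reviews, labels)

    # Score an unseen review: probability its language resembles
    # reviews from firms with subsequent misconduct.
    new_review = ["we are told to skip safety checks when deadlines slip"]
    print(model.predict_proba(new_review)[0][1])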

At a larger scale, there is strong evidence that advances in technology bring a greater capacity for corporate monitoring. For example, a paper we co-authored finds that the introduction of 3G mobile broadband access across the U.S. led to a substantial reduction in violations and penalties by local organizations, an effect mediated by increased social media activity.10

A Cultural Contradiction?

At first blush, it may seem that our stated goal of improving company culture is in direct contradiction with our suggested means of AI technology. The building block of any good company culture is trust, and people have been outspoken about their distrust of AI. An article in Harvard Business Review reports that 67 percent of C-suite executives say they are “not comfortable” accessing or using data from advanced analytics systems, preferring instead to rely on gut instinct for decision making. And, as noted above, employees react negatively to monitoring they know about. Clearly, there is a line where AI crosses from helpful to invasive, so using AI requires a careful, ethical approach. AI can help measure problems and serve as a tool in building a good company culture, but it cannot substitute for one. With this in mind, we’ve developed four rules for implementing AI:

  • Rule 1: Ethics first, technology second. In some of the instances of AI gone wrong we’ve mentioned, technological capacity clearly trumped ethical considerations. Taking frequent screenshots of remote employees’ desktops invaded their privacy and caused employees to behave badly.

  • Rule 2: Avoid data overload. In the case of HSBC, risk management used massive amounts of data simply because the data was available, and lost the forest for the trees. Without carefully considering which data points will be most helpful, collecting more data may only add confusion. A company needs to be mindful about the type of data it uses to avoid noise in the signals it receives.

  • Rule 3: AI should not reinvent the wheel. AI technologies are meant to complement existing business practices. In the Veolia example, employees already knew where the problems were but had no easy, consequence-free way of reporting them. The introduction of the Whispli platform gave them the confidence to speak out.

  • Rule 4: Machine learning requires manager learning. By definition, machine learning involves iterations and improvements over time. Managers should take the same path. How can we expect the machine to learn if the company doesn’t learn? Establishing a new technology requires management training. In fact, our research on ERP implementations shows that new technology is most successful when employees adapt and learn a new way to work that incorporates the new technology.

If choosing a futuristic world in which to live, we would choose Star Trek over Minority Report, but with some caveats. A big chair and a bank of monitors using data and technology to help society find and track threats sounds much better than using predictive AI to charge people with crimes they have not yet committed. However, even the Star Trek model can fail without strong leadership. A thoughtful balance of data and ethics is ultimately critical for the successful adoption of AI.

References

  1. Dey, A., Heese, J., & Weber, J. (2019). Starling Trust Sciences: Measuring Trust in Organizations. HBS No. 9-120-006.

  2. Blattner, L., & Nelson, S. (2021). How Costly is Noise? Data and Disparities in Consumer Credit. Working Paper.

  3. Pacelli, J., Shi, T., & Zou, Y. (2022). Communicating Corporate Culture in Labor Markets: Evidence from Job Postings. Working Paper.

  4. Dey, A., Heese, J. & Weber, J. (2019). Regtech at HSBC. HBS No. 9-120-046.

  5. Pacelli, J. (2019). Corporate Culture and Analyst Catering. Journal of Accounting and Economics, 67(1), 120-143.

  6. Gao, J., Pacelli, J., Schneemeier, J., & Wu, Y. (2020). Dirty Money: How Banks Influence Financial Crime. Working Paper.

  7. Pacelli, J. & Heese, J. (2023). Does Information Technology Reduce Corporate Misconduct? Working Paper.

  8. Dey, A., Heese, J., Godwin, C., & Weber, J. (2021). Whistleblowing at Veolia: A Technology Solution. HBS No. 9-122-050.

  9. Campbell, D., & Shang, R. (2022). Tone at the Bottom: Measuring Corporate Misconduct Risk from the Text of Employee Reviews. Management Science, 68(9), 7034-7053.

  10. Heese, J., & Pacelli, J. (2022). The Monitoring Role of Social Media. Review of Accounting Studies, forthcoming.



Joseph Pacelli
Joseph Pacelli is the Gerald Schuster Associate Professor of Business Administration in the Accounting and Management Unit at Harvard Business School. His research covers topics related to capital market gatekeepers such as financial analysts and bankers, culture and diversity, and new technology.
Jonas Heese
Jonas Heese is the Marvin Bower Associate Professor of Business Administration in the Accounting and Management Unit at Harvard Business School. He is an award-winning researcher covering topics on corporate misconduct, with a special focus on the role of regulators, whistleblowers, the media, and organizations’ compliance systems in preventing such misconduct.
