Artificial intelligence and big data: a double-edged sword for risk management and internal audit

By Peter Brady, Principal, Risk Consulting, RSM US LLP

Napoleon Bonaparte once said that an army marches on its stomach. Failure to provide troops with the right food at the right time and the right place could result in military disaster.

Today’s businesses are also in growing need of a strong supply chain — not so much one of material goods, but of models and data. The fourth industrial revolution — this time, a digital one — is now fully underway.

Artificial intelligence (AI), once squarely in the realm of science fiction, has now become a business tool that can yield significant competitive advantage. Machine learning, a subset of AI, trains computers to make decisions by learning rules from data rather than following explicitly programmed instructions. Patterns of actual outcomes across large volumes of historical data shape those rules, which are then applied to predict outcomes in new data.
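A minimal sketch of this idea: historical outcomes "train" a decision rule, which is then applied to new cases. The loan data, field names and single-feature model below are hypothetical illustrations, not a production lending model.

```python
# Historical loan records: (credit_score, repaid?) -- hypothetical data.
history = [(520, False), (580, False), (610, False), (640, True),
           (660, True), (700, True), (720, True), (750, True)]

def train_threshold(records):
    """Learn the score cutoff that best separates repaid from defaulted loans."""
    best_cut, best_correct = None, -1
    for cut in sorted({score for score, _ in records}):
        # Count how many historical outcomes this cutoff classifies correctly.
        correct = sum((score >= cut) == repaid for score, repaid in records)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

cutoff = train_threshold(history)   # rule learned from past outcomes

def approve(score, cut=cutoff):
    """Automated decision for a new applicant, based on the learned rule."""
    return score >= cut

print(cutoff)          # 640 -- the boundary the historical pattern suggests
print(approve(680))    # True
print(approve(550))    # False
```

Real systems learn from thousands of features rather than one, but the principle is the same: the quality of the learned rule is only as good as the data that trained it.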

Today, smart algorithms approve loans, provide interactive chat support, select and suggest products for us to buy, review contracts, price insurance, change traffic patterns and predict the weather.

Advances in processing technology mean that automated decision making will soon be capable of handling highly complex, real-time situations far faster, and arguably with better outcomes, than a human can.

This new era presents a double-edged sword: on one hand, AI introduces a new set of risks requiring careful management; on the other, it offers novel tools that help auditors and risk managers protect the enterprise.

The risks

If the supply chain producing the data that fuels decisions is ineffective, unreliable, unavailable or insecure, the automated decisions and subsequent transactions will be flawed: open to error, manipulation and even fraud. Similarly, decision-making algorithms trained on bad data will make the wrong decision repeatedly, destroying any business advantage. Training data can contain errors, be incomplete or unrepresentative of the full population, or even mirror the conscious or unconscious biases of the developer.

In addition, some automated decisions have consequences for the safety, well-being and health of a human population, raising ethical concerns.

To be effective in this new environment, internal auditors and risk managers need a strong understanding of how AI works and of the importance of good data hygiene. They must apply the right blend of skepticism and challenge to strike the appropriate balance of risk and control, and then provide sound advice to management.

The opportunity

There is the promise that AI can provide a risk manager or internal auditor with a set of tools to make objective, increasingly accurate predictions of risk. AI, if adopted well, enables a more informed business conversation about risk. Consider the ability to:

  • judge whether an employee is likely to make bad choices
  • anticipate when someone is likely to commit fraud
  • forecast when a business risk will result in a significant loss, or a control will fail
  • predict cyber breaches
  • determine when a key project will fall behind schedule
  • assess whether a new product will fail to deliver the promised benefits
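
A hedged illustration of the kind of tooling implied above: flagging transactions whose amounts deviate sharply from historical norms, a common first step toward fraud prediction. The data and the three-standard-deviation cutoff are hypothetical; real models draw on many features and require proper training and validation.

```python
import statistics

# Historical transaction amounts for an account -- hypothetical data.
past_amounts = [120.0, 95.0, 130.0, 110.0, 105.0, 125.0, 98.0, 115.0]
mean = statistics.mean(past_amounts)
stdev = statistics.stdev(past_amounts)

def flag_for_review(amount, z_cutoff=3.0):
    """Flag a transaction lying more than z_cutoff standard deviations from the mean."""
    z = abs(amount - mean) / stdev
    return z > z_cutoff

print(flag_for_review(112.0))   # False: a typical amount
print(flag_for_review(950.0))   # True: an extreme outlier worth a closer look
```

Even a crude screen like this lets a risk team focus scarce investigative time on the cases most likely to matter; the business conversation then shifts from anecdote to evidence.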

The near-term future for risk managers and auditors will demand a significant change in skills. Teams will need to be versed in data science, statistical modeling and technology, as well as in how to apply these techniques in a risk context. However, perhaps the primary need is to establish the same passionate drive to innovate that powered businesses to adopt AI in the first place. If risk managers and internal auditors fail to grasp the potential of new technologies to reshape their work, they will quickly become redundant.

Napoleon’s imperial dreams foundered during the Russian campaign: despite his famous maxim, he overextended his supply lines, the enemy pursued a scorched-earth policy and a deadly winter set in.

Securing your data supply chain, ensuring that risks in the AI development process are well managed, and increasing your team’s “AI IQ” are key to success in this fast-moving environment.

Do nothing in this space or move too slowly…at your peril.

About the author

Peter Brady leads RSM’s business risk consulting capability and its financial services consulting practice. He has over 30 years of experience in audit and consulting, specializing in governance, risk and controls. Today, he focuses on strategy, growth, client development and building the right culture in his teams. Contact Peter at peter.brady@rsmus.com or 212-372-1880.

For more information, contact Ken Jenkins, Partner, Tax Services at ken.jenkins@rsmus.com or 513-619-2863.

RSM US LLP is a Goering Center Sponsor, and the Goering Center is sharing this content as part of its monthly newsletter, which features member and sponsor articles.

About the Goering Center for Family & Private Business
Established in 1989, the Goering Center serves more than 400 member companies, making it North America’s largest university-based educational non-profit center for family and private businesses. The Center’s mission is to nurture and educate family and private businesses to drive a vibrant economy. Affiliation with the Carl H. Lindner College of Business at the University of Cincinnati provides access to a vast resource of business programming and expertise. Goering Center members receive real-world insights that enlighten, strengthen and prolong family and private business success. For more information on the Center, participation and membership, visit goering.uc.edu.