Compliance and Accountability Questions Posed in an AI World
- L Deckter
- Jun 19, 2024
- 2 min read
Compliance with laws and regulations. Accountability for actions and decisions. As a society, we may take these for granted, but the world is about to change in a way that will test these two bedrocks of civilization, so powerful in bringing about confidence and trust.
Following the corporate accounting scandals of the early 2000s, regulations evolved to include clear accountability around the integrity of the financial statements of publicly traded companies. This was codified in law as the Sarbanes-Oxley Act, or SOX for short. In essence, checks were made against the data by financial auditors outside the company, and the company’s management needed to personally certify that the information was accurate and correct. If they lied, the consequences were real: jail time for executives who made knowingly false statements in their financial reports.
At that time, financial reporting was largely manual. Humans were involved, and legitimate mistakes were sometimes made. Those mistakes would be corrected and the financial statements re-issued. What happens when the financial statement is no longer generated by humans but by Artificial Intelligence (AI) enabled software? What happens when the financial information is no longer readable by humans, or no longer stored in a way that humans could compile into something understandable? What if the information simply sits in advanced databases where ‘blob’ storage technologies mask the big picture unless millions of seemingly unrelated data bits are put together in the right sequence? By relying on technology so far out of our reach as humans, we put the accuracy and auditability of our information—whether medical records, company earnings reports, or stock trades—at risk.
And what happens when the systems become so complicated, the data so vast, and the models and calculations so complex that humans can no longer get out a piece of paper and run the numbers for themselves? Would we need an AI built for the sole purpose of auditability—the ability to check and verify the accuracy and completeness of the data? Even the ancient Roman poet Juvenal understood this paradox with his words “Quis custodiet ipsos custodes?”—who will watch the watchmen?
This has me concerned. With humans so far out of the loop, and the technology incomprehensible, how could we possibly know if the information is correct? More importantly, who is held accountable? Is it fair to hold human executives accountable? And what if the AI simply made a mistake?
Financial questions seem relatively simple compared to more complex and emotionally charged situations in which AI causes harm to humans. What happens when AI robotic systems have an accident? Or when a car decides to save its passenger at the expense of a crowd of pedestrians, or vice versa? And what will this do to the insurance industry that underwrites these risks?
No good answers are readily apparent. Historically, as new technology has been introduced, the world has had to adapt: jobs were lost, and new jobs were created. As this great AI experiment meshes with the fabric of human civilization, there are things to think about and solutions to create for an endless stream of questions.