Law360 quotes Charlotte Hill on AI accountability for senior managers
Under the senior managers and certification regime, where does responsibility lie for regulated firms' governance of AI models?
As regulated firms increasingly deploy AI systems, industry commentators have called on the FCA to clarify where senior managers' accountability lies.
Given the potential damage from AI system failures, gaps in the regulatory rulebook have been identified that might leave senior managers accountable should AI models under their remit go rogue. The senior managers regime appears unclear on managers' implied responsibilities, for instance how AI failings fall within their oversight of systems and controls. It has been made clear that not understanding an AI model's reasoning would not be accepted as a defence.
Charlotte Hill, Partner in the Financial Services Regulation & Funds team, considers how the watchdog might implement accountability for AI, and shares her thoughts with Law360:
"There is a strong case for clearer allocation of AI accountability under the senior managers and certification regime, particularly in larger firms.

"However, that is distinct from requiring one individual to personally understand every technical detail. This complexity could raise difficult questions of accountability and liability in the event of risk exposure."
Read the full article in Law360 here (subscription required).