In our new research paper, we discuss how privacy professionals and their organizations can take on AI governance, and what is at stake if they don't.
Key findings include:
Although privacy is itself a relatively young field, privacy professionals are being asked to take on the new challenges posed by AI. This is as much an opportunity as it is a challenge.
AI governance is value generating: it not only keeps initiatives compliant with regulations and mitigates risk, but also enables more effective use of AI systems. Benefits of better governance include reduced system failure rates and downtime, increased trust from end users, and a signal of quality to investors. The ROI from value-generating governance therefore gives organizations an incentive to invest in the required technical expertise.
There's no getting around the technical barriers to thriving in this new field. All professionals will have to gain domain-specific knowledge of AI to be effective. The key is not fully understanding the technology (a skill even technologists often lack) but knowing how much information is sufficient to act on. Specifically:
Legal and compliance professionals will have to understand risks from non-personal data without strict regulatory standards as a guide. They will need to define and defend their organization's own AI governance guidelines, and be nimble enough to adapt to coming regulations.
Technical professionals will also have to understand data handling processes for non-personal data and how models process and output this data. This includes strong knowledge of model interpretability techniques such as LIME and SHAP.
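To make the interpretability point concrete, the sketch below computes exact Shapley values for a single prediction of a toy model. This is the idea underlying SHAP, not the SHAP library itself: the model, feature values, and baseline here are all hypothetical, and real tools approximate this computation rather than enumerating every feature subset.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for one prediction of model f.

    For each feature i, average the marginal contribution of using
    its real value instead of the baseline value, over all subsets
    of the other features (simple baseline-masking scheme).
    """
    n = len(x)
    features = list(range(n))
    phi = [0.0] * n

    def value(subset):
        # Features in `subset` keep their real values; the rest
        # are replaced by the baseline.
        z = [x[i] if i in subset else baseline[i] for i in features]
        return f(z)

    for i in features:
        others = [j for j in features if j != i]
        for k in range(len(others) + 1):
            # Standard Shapley weight |S|! (n - |S| - 1)! / n!
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                s = set(subset)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Hypothetical linear model: for linear models the exact Shapley
# value of feature i is w_i * (x_i - baseline_i), which makes the
# output easy to sanity-check (here ~[2.0, -3.0, 1.0]).
w = [2.0, -1.0, 0.5]
model = lambda z: sum(wi * zi for wi, zi in zip(w, z))
print(shapley_values(model, [1.0, 3.0, 2.0], [0.0, 0.0, 0.0]))
```

The attributions sum to the difference between the prediction and the baseline prediction, which is the property that makes such explanations auditable: a governance team can check that a model's output is fully accounted for by its per-feature contributions.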
The biggest skills gap is knowledge of the underlying technologies, paving the way for providers with specialist knowledge of AI systems and how to govern them to take center stage. In the interim, hiring specialist expertise could help bolster the 59% of technical privacy teams that currently describe themselves as understaffed.
Upskilling can be supported by strong organizational governance processes, in which more technically knowledgeable stakeholders translate the implications of handling AI models for those in charge of other aspects of governance, such as the legal and policy teams.
Failure to adopt these best practices and invest in AI knowledge and governance can result in regulatory fines, consumer mistrust, and operational disruptions. Organizations also risk reputational damage, legal liabilities, and losing out on top talent and investors who value well-governed AI systems.