Navigating the Layers of AI and Data Regulations in Companies: Insights from a Data Protection Officers Roundtable #NAMA

How can companies navigate data protection and AI deployment strategies at the same time? A look at the challenges and solutions.


The rapid growth of AI has introduced a host of new challenges for organizations, particularly around data protection and regulatory compliance. Data Protection Officers now find themselves navigating an increasingly complex web of regulations across multiple jurisdictions, often in the face of uncertainty. In a highly competitive landscape, companies have to balance innovation, such as training AI models on vast datasets, with the need to protect privacy.

At the Data Protection Officer Roundtable during PrivacyNama 2024, experts explored the challenges of developing and deploying AI, offering best practices for overcoming these obstacles.

Challenges in regulation

Inconsistency in regulation

Derek Ho, Assistant General Counsel, Mastercard, said that developers in industries like finance are often already heavily regulated when it comes to frameworks around risk management and IT outsourcing. On top of that, AI integration brings in the “additional complexities around data governance.” Data protection officers often have to balance these regulations, in addition to ensuring that external service providers apply those principles and requirements, as AI often necessitates hiring an external service provider.

Another challenge companies face is the lack of consistency in regulation. “We’ve seen this in some countries where a privacy law defines personal data very differently from how a financial regulator would define it, or a healthcare regulator would define it. And then you have different definitions of a basic construct of personal data or non-personal data,” Derek Ho said.

Internationally, however, he said there is a lot more consistency. He cited multilateral organizations like the Organisation for Economic Co-operation and Development (OECD), which have created common principles for policymakers to follow.

He said that while regulations differ across countries, most regulatory proposals, frameworks, and guidelines try to land on a common set of principles like fairness, accountability, transparency, explainability, and human centricity. “And there are going to be differences, but you’re always going to have some level of differences. This is not a unique problem. You have competition laws which are different. Healthcare laws are different as well.”

Uncertain regulatory environment

Sriram SL, Data Protection Officer, Zomato, said that companies based in countries without fixed regulations, like India, also face challenges.

He said that the general principle they follow for data processing is asking whether the integration is necessary and whether the AI is causing harm, which he said is the fundamental basis for new regulations. “While I may not have a regulatory risk framework, I can internally always put in place a system so that high-risk [applications of AI] are subject to more scrutiny and therefore are subject to more checks and balances within the system before it [the platform] becomes a user-facing model,” he said.

Challenges for deployers

Sriram SL noted that companies like Zomato that deploy AI may have less control over the AI system than its developers do.

He noted that companies developing AI should consider its unpredictable emergent capabilities and avoid being overly aggressive in implementing new features. “I can’t just let an AI system loose within the organization, within the product, within my app, and then deal with the fallout. There needs to be extensive testing and there needs to be proportionality. Does the benefit of the AI system really necessitate the risks we take in deploying?” he asked.

Navigating changes in privacy regulations, retrospectively

As more jurisdictions introduce regulations, more guardrails are imposed on AI models that train on personal data. Those who trained their models before these regulations are now in a difficult position.

For example, the FTC has suggested algorithmic disgorgement in certain cases. To address this issue, Srinidhi Srinivas, Partner, Ikigai Law, suggested at the session on “Applying Data Protection Principles to AI” retraining the model when it comes to input or training data, implementing data subject rights and deletion rights, and creating output filters. Idriss Kechhida, Global Data Protection Officer, OpenAI, noted that companies in these cases can face litigation and regulatory action.

He advised companies “to always keep the past as a good example of a cautionary tale when these things happen. You can also make use of these things in a way that can also help you improve in the future, because you can have actual data points that you can talk to internally when you are doing these things... when it comes to the future, try always getting better. It’s never too late to apply a good culture within a company that can help you address this in advance.”

Derek Ho suggested that companies follow the US National Institute of Standards and Technology (NIST) AI Risk Management Framework, with its functions of govern, map, measure, and manage, to stay ahead of regulatory expectations.

Recommendations for organizations to inculcate best practices

Mugjan Caba, Senior Compliance Officer, Truecaller, said, “[Organizations must] ideally have an internal governance structure; we need to definitely team up with the broader organization. We were already doing that for the purposes of data protection compliance, but now the need is even bigger for the technical and product aspect of the AI that we see.

” Derek Ho said that internal sponsorship could have a large influence on a company’s culture of privacy. “What I would say to any company that wants to get better at operationalizing privacy is: do you think that you have the right sponsorship internally? An easy way to answer this [is to ask]: do we think the CEO actually has a sense of the challenges and the opportunities that relate to privacy? Is there a forum in which these senior executives hear about privacy and can weigh in on certain critical decision-making? Is there a steering committee for privacy?” he said. He also pointed out that compliance culture usually comes from the top of the company, and it is important to inculcate that culture within the organization.

He said that while companies may want to incorporate AI to stay competitive, they must not be data-hungry and must keep consent at the center of their policy. “It is also the firm or the company’s responsibility to be upfront with the customer about what the data is being used for and not start doing processing on the backend without the customer. So it really has to be consent-driven. But at the same time, we have to be mindful of data minimization. Don’t just start using processes or start deploying features which are overly reliant on personal data, which they were not before,” he said.

Answering Data Subject Erasure Requests

Sriram SL said companies must take a proactive approach to data subject rights.

“You [companies] will have to foster a system where we take into account from day one that the data subject whose data is being used for a particular product or platform will continue to have his rights over the course of that data’s storage and retention period in the system,” he said. He also said that companies must keep track of their data, ensuring that personal data does not leave the system, and have a systematic approach to ensure that users have more control over their data. He said companies must not rush to capitalise on AI.

“You do an assessment... and make sure that you’re only using it for a specific purpose, and you’re not just putting out personal data. That, in turn, makes it easier for you to handle DSRs [Data Subject Requests] because, while you have integrated a new technology on your platform, you’ve not really done something new or unconventional with the data beyond what was consented to. So you’re still staying true to the purpose for which the data was consented. And therefore, it becomes easier to comply, because ultimately, you’re not doing something new, even if the nature of the request is new,” he said.