Artificial intelligence (AI) is everywhere, transforming the way we live and work. It’s rapidly revolutionizing industries with its potential to solve complex problems, enhance decision-making, and improve efficiency. As a result, AI is increasingly embedded in the products and services that third-party vendors offer to organizations, often without the organization’s awareness.
Understanding the Risks of Third-Party AI
AI is an impressive technology, but it also comes with significant risks, especially when it’s integrated into vendor products or services.
Let’s examine two of the most common risks of third-party AI usage:
Data security and privacy – AI systems need significant amounts of data to function effectively, so that data must be protected from theft and misuse. AI systems may access different types of data such as:
Customer/consumer information and personally identifiable information (PII): This includes addresses, driver's licenses, passports, family member details, financial or health information, social media or web use data, shopping behaviors, and more.
Sensitive company data: This includes employee records, financial information, customer data, legal and compliance information, supply chain inventory, logistics, forecasting, and all types of intellectual property.
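To make these categories actionable, it can help to record which data categories each vendor’s AI features can touch in a simple inventory. Below is a minimal, hypothetical sketch in Python; the category labels and the `VendorAIDataAccess` structure are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical data-category labels; align these with your own
# data classification policy.
PII = "pii"                    # addresses, IDs, financial/health info
SENSITIVE_COMPANY = "company"  # employee records, financials, IP, supply chain

@dataclass
class VendorAIDataAccess:
    """Records which data categories a vendor's AI features can access."""
    vendor_name: str
    ai_features: list[str] = field(default_factory=list)
    data_categories: set[str] = field(default_factory=set)

    def handles_pii(self) -> bool:
        return PII in self.data_categories

# Example entry for a hypothetical vendor
record = VendorAIDataAccess(
    vendor_name="Acme Analytics",
    ai_features=["fraud-detection model", "support chatbot"],
    data_categories={PII, SENSITIVE_COMPANY},
)
print(record.handles_pii())  # True -> flag for deeper privacy due diligence
```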
Compliance and legal – There are significant legal and compliance concerns around data and other assets when they’re accessed and processed with AI. AI-driven data processing may be subject to numerous laws and regulations, including:
Health Insurance Portability and Accountability Act (HIPAA)
Children's Online Privacy Protection Act (COPPA)
Gramm-Leach-Bliley Act (GLBA)
Electronic Communications Privacy Act (ECPA)
California Consumer Privacy Act (CCPA)
Numerous state privacy laws
Additionally, there’s a risk of violating permissible-use requirements, which prevent out-of-context, unrelated, or unfair use of data.
While these are two significant risks associated with AI, they’re not the only ones. Ethical risks, including bias and fairness, require attention, as do risks around algorithm transparency, financial exposure, and intellectual property. As AI technology becomes more widespread, the risks associated with it are expanding as well.
Identifying AI Risk in Your Third-Party Vendor Portfolio
You likely have third-party vendors that are currently using AI in their products and services. If you haven't done so already, it’s important to identify these vendors and assess the specific AI risks they pose to your organization and customers.
Many third-party risk management (TPRM) programs haven’t yet incorporated AI risks, so it’s crucial to update your TPRM framework and tools to include them now.
A practical, two-pronged approach can ensure you’re identifying existing third-party AI risks and building the infrastructure to properly assess and mitigate them:
Getting started – Develop a short questionnaire to help identify the products and services that use AI. Here are three suggested questions that can provide a wealth of information (a sample structure appears after this list):
Has AI technology been used in the research, development, or production of any of your products or services? It's worth noting that different types of AI carry different levels of risk. For instance, a vendor might use image recognition for research purposes, generative AI to create a system that interacts with customers directly, such as a chatbot, or machine learning to identify fraud across a series of transactions.
Are there any plans to incorporate AI in your products, services, or operations? It's crucial to consider that your third-party vendor's adoption of AI can significantly impact your organization, even if they aren't using it today.
Do you have any policies on employee use of AI? Ask whether your third-party vendor has any limitations or prohibitions on employees’ use of AI for work-related tasks. With the increasing popularity of generative AI systems such as ChatGPT, it’s essential to understand how your vendor governs its employees’ use of these technologies, especially if the AI-based service uses the data input to train its model.
Begin with your critical and high-risk vendors and work your way down the list. This simple approach can help you determine where additional due diligence and risk reviews are needed.
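One lightweight way to operationalize these three questions is to encode them as structured data so responses can be collected and triaged consistently. The sketch below is a minimal Python representation; the question IDs and the flagging rule are hypothetical assumptions, not a prescribed methodology.

```python
# The three screening questions as structured data. Question IDs and
# the flagging rule below are illustrative assumptions.
AI_SCREENING_QUESTIONS = [
    {"id": "ai_in_products",
     "text": ("Has AI technology been used in the research, development, "
              "or production of any of your products or services?")},
    {"id": "ai_plans",
     "text": ("Are there any plans to incorporate AI in your products, "
              "services, or operations?")},
    {"id": "ai_employee_policy",
     "text": "Do you have any policies on employee use of AI?"},
]

def needs_ai_review(responses: dict[str, bool]) -> bool:
    """Flag a vendor for deeper AI due diligence.

    Flags when AI is in use or planned, or when employees may use AI
    without a governing policy.
    """
    return (responses.get("ai_in_products", False)
            or responses.get("ai_plans", False)
            or not responses.get("ai_employee_policy", True))

# Example: a vendor that uses AI today and has no employee AI policy
print(needs_ai_review({"ai_in_products": True,
                       "ai_employee_policy": False}))  # True
```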
Updating your TPRM framework – It's not enough to identify third-party vendors that use AI; you’ll also need the right tools and processes to verify that they have adequate AI risk management practices and controls, and that risks are well-managed and monitored throughout the contract. This means incorporating AI risk across your entire TPRM framework. Here are key areas to review and update (a simplified scoring sketch follows this list):
Incorporate AI-related questions in the inherent risk assessment
Update vendor questionnaires to include AI-related questions
Identify the types of due diligence documentation you’ll request as evidence of AI controls
Review and update standard contract language to address AI risks
Consider how AI will be factored into third-party performance monitoring and management
Consider how AI will be factored into third-party risk monitoring
Update governance documentation
Evaluate stakeholder education and collaboration
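As one illustration of the first item above, AI factors can be folded into an inherent risk assessment as weighted scoring inputs. The factor names, weights, and tier thresholds below are hypothetical placeholders; calibrate them to your organization’s own risk methodology.

```python
# A simplified sketch of adding AI factors to an inherent risk score.
# Factor names, weights, and tier thresholds are hypothetical.
AI_RISK_WEIGHTS = {
    "uses_generative_ai": 3,       # e.g., customer-facing chatbot
    "ai_processes_pii": 4,         # AI touches regulated personal data
    "trains_on_customer_data": 4,  # inputs may be used to train models
    "no_employee_ai_policy": 2,    # ungoverned internal AI use
}

def ai_inherent_risk_tier(factors: dict[str, bool]) -> str:
    """Map AI risk factors to a simple inherent risk tier."""
    score = sum(w for name, w in AI_RISK_WEIGHTS.items() if factors.get(name))
    if score >= 7:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# Example: a vendor whose customer-facing chatbot handles PII
print(ai_inherent_risk_tier({
    "uses_generative_ai": True,
    "ai_processes_pii": True,
}))  # "high" (score 7)
```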
Note: Don’t overlook this important consideration! It’s crucial to update your TPRM processes and tools with a sense of urgency. However, it should be noted that AI isn’t yet as well understood as other established risk domains. Even experienced TPRM professionals may face unique challenges when dealing with AI, which could lead to delays, rework or, in the worst case, ineffective risk identification, assessment, and management.
To help avoid these challenges, your organization should work with a qualified AI subject matter expert who can guide you through updating the TPRM framework. This expert can help determine the right questions to ask on a vendor risk questionnaire, identify the appropriate due diligence documents, and provide ongoing support for vendor risk reviews. If you don't have this expertise within your organization, you may need to engage external resources or consultants.
By taking this simple approach, your organization can begin to identify vendor AI usage and take steps to mitigate the risks, leaving you in a safer, more prepared position.