Security and privacy in the age of AI
The benefits of generative AI are quickly coming into focus, but the potential risks have given many decision-makers pause. According to the IBM Institute for Business Value, 47% of executives have expressed concern about the security and privacy risks inherent in generative AI solutions.[1]
Insurers work with sensitive customer data every day, so implementing generative AI tools could expose them to security issues and increased regulatory scrutiny. Companies need to understand the threats AI could pose if they hope to maximize the benefits and mitigate the risks.
Navigating the potential risks of AI
Adjusters rely on certain types of personal information to process claims quickly and effectively, so training AI solutions on this data is essential to realizing the technology's full benefits. Unfortunately, AI solutions cannot reliably recognize and flag sensitive information in their responses. If adjusters do not review generated content thoroughly, they risk privacy leakage when using AI solutions during claims.
For example, an adjuster might ask an AI platform to generate a policy report that includes some personally identifiable information. Because the AI cannot discern whether the right information is going into the right report, it may insert one customer's sensitive information into another customer's report, leading to an inadvertent breach of privacy.
In addition to the increased risk of user error, the risk of cyber threats is also very real. Generative AI tools have given bad actors new ways to infiltrate databases and manipulate proprietary and personal data. To stop cybercriminals from exploiting customer data or manipulating outputs, strong cybersecurity strategies must take priority.
Fortunately, there are ways that organizations can mitigate the mishandling of customer data and cyber threats.
First, insurers should establish strong training programs aimed at creating broad awareness of generative AI processes across their workforce. It is important that adjusters are trained to preempt privacy violations rather than simply respond to errors as they happen.
Second, insurers should ensure they are working with secure data from the start. Eager to test AI tools, employees might incorporate third-party AI tools into their personal workflows. While this may be acceptable for small tasks, prompts that include sensitive information should never be sent to these tools.
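As a rough illustration of how that guidance might be enforced in practice, the sketch below checks a prompt for common categories of personally identifiable information before it is allowed to leave the organization. The patterns and function names are hypothetical and illustrative only; a production deployment would rely on a vetted data-loss-prevention or redaction tool rather than ad hoc regular expressions.

```python
import re

# Hypothetical example patterns for common PII types (US-style formats).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def detect_pii(prompt: str) -> list[str]:
    """Return the PII categories detected in a prompt, if any."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def check_before_sending(prompt: str) -> str:
    """Block prompts that appear to contain sensitive data before they are
    sent to an external, third-party AI service."""
    found = detect_pii(prompt)
    if found:
        raise ValueError(
            f"Prompt appears to contain sensitive data ({', '.join(found)}); "
            "do not send it to an external tool."
        )
    return prompt
```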
Additionally, organizations should consider how they are integrating AI solutions. By housing AI models on internal servers, companies can safeguard themselves from malicious external actors and ensure their training data stays in a closed loop.
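To make the "closed loop" idea concrete, the snippet below sketches what routing requests to an internally hosted model might look like. The endpoint URL and response format are assumptions for illustration; the actual interface will depend on the platform an organization deploys.

```python
import requests

# Hypothetical internal endpoint: the model runs on company infrastructure,
# so prompts and training data never leave the corporate network.
INTERNAL_MODEL_URL = "https://ai.internal.example.com/v1/generate"

def generate_internal(prompt: str, timeout: int = 30) -> str:
    """Send a prompt to the internally hosted model rather than a public AI service."""
    response = requests.post(
        INTERNAL_MODEL_URL,
        json={"prompt": prompt},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["text"]
```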
The push to adopt AI solutions is understandable. Companies feel pressure to implement AI tools quickly to keep pace with the competition and capture the benefits. However, a more cautious approach could give organizations the time they need to integrate these tools responsibly.
The Crawford approach to AI safety
At Crawford, we believe in taking a measured approach to AI adoption. Organizations should first take the time to assess the state of their data security and privacy protections. Introducing AI into a troubled system will only exacerbate existing issues.
Crawford is committed to maintaining the highest standards of security and privacy in our solutions. We are optimistic about the future of AI, but it is important to approach new technologies with caution. As we explore the benefits and risks of generative AI, we will continue to put people first by strengthening the trust we have with our customers.