P&C Technosavvy | April 2025 Issue

AI Accomplice

Q&A with Ashwin Kashyap, Chief Product Officer, CyberCube
By Michael Fitzpatrick | Posted on April 1, 2025

Ashwin Kashyap, co-founder of the cyber risk analytics and modeling firm CyberCube, discusses how AI has enhanced attacks, including phishing, deepfakes, and social engineering, and cautions that autonomous AI agents may further raise the threat level.

Q
How have cybercriminals been using AI to support their attacks?
A

I’ll give a nuanced answer to this across multiple dimensions. AI has become a force multiplier for cybercriminals, making their attacks more effective and scalable. We’ve seen evidence of this in the real world that I’ll walk through.

No. 1, AI-enhanced phishing attacks: AI enables the generation of highly convincing phishing emails, eliminating traditional red flags such as grammatical errors and poor formatting. We’ve seen this being used to increase the effectiveness of a phishing campaign.

Based on sources we’ve reviewed, AI-generated phishing emails have demonstrated a 78% higher click-through rate than manually written scams.

What does this mean for organizations? Organizations must invest in AI-driven threat detection and continuous employee security awareness. If you’re aware of whether an email is AI-generated, you can make better decisions about what to do with it.

Example No. 2 is deepfakes. There has been evidence of deepfake-based financial fraud. Executives have been impersonated. Even nation-state leaders have been impersonated recently. All of this says that we need to authenticate the identity of the individual presenting information to you. So robust multifactor authentication and verification protocols become very important for financial transactions as we think about the future.

The last example is AI-powered social engineering attacks. Criminals use AI to analyze publicly available data about individuals, and they can craft highly targeted and successful spear-phishing campaigns. You can be offered fake jobs, or you can be fed misleading information about family members. All of these are examples of social engineering. If you look at these different themes (phishing attacks, financial fraud, social engineering), the common thread is that AI gives threat actors a boost, making these attacks easier to carry out and increasing their probability of success.

Q
You’ve cited autonomous AI agents as a growing threat. What is an autonomous AI agent?
A
Autonomous AI agents are self-learning cyber tools that can conduct reconnaissance, identify vulnerabilities, and deploy malware without human intervention. All of the things a trained cybercriminal can do, these autonomous agents can do. Unlike traditional cyber threats, these agents adapt in real time, lowering the cost and effort required for attackers to execute complex operations. What does this mean for the industry at large? Security strategies must shift from static defenses to dynamic, AI-powered threat detection and response systems.
Q
How do the autonomous AI agents support scaling of cyberattacks?
A
With no human in the loop, they can execute attacks at unprecedented speed and scale. It’s really the scale element that is concerning for the insurance industry, which worries more about one campaign that can impact multiple companies in a portfolio than about a single large breach. The ability of these autonomous AI agents to scale cyberattacks is a particularly worrisome development for the insurance market: using these agents, threat actors can automate and optimize attacks end to end, from identifying targets to breaching defenses and exfiltrating data.
Q
I picture the cyber singularity with AI agents assembling AI gangs, but perhaps that’s not in the future yet.
A
It could be. AI controlling AI is certainly one of the concerns around AI security at large. So, you do want some controls in place on where AI gets utilized. So far, I’ve really talked about human threat actors deploying AI to make their attacks more effective. But you could envision a future state of the world where there are AI agents controlling AI agents.
Q
Similar to large language models, can autonomous AI agents be trained on prior exploits?
A

Just as large language models improve through training on large, relevant data sets, autonomous AI agents can refine their attack strategies by learning from past cyber incidents, that is, from what has and has not worked. In the simplest possible terms, you can think of an AI agent as a human brain. As a human being, you try out certain things and you keep the things that work based on past experience. That’s where experience matters.

AI agents are no different except that they start with a large corpus of useful data that helps them optimize their attack strategies. This allows them to evade traditional security controls and exploit vulnerabilities faster than ever before. As a result, organizations must continue to update security postures and leverage AI-driven threat intelligence to stay ahead of these evolving attack methodologies. This idea of being adaptive is what I would call out as one of the characteristic traits of AI agents.

Q
Recently, DeepSeek roiled the AI industry. Does the availability of cheaper, easier AI help cybercriminals?
A
AI acts as an aid to an enterprising cybercriminal, so as AI becomes more accessible and cost-effective, even lower-skilled cybercriminals can launch and scale highly sophisticated attacks. The business of cybercrime is no different from the business of technology: the goal is basically to make money at the lowest possible expense. The availability of AI-generated malware, automated phishing campaigns, and deepfake tools can lead to a democratization of cybercrime. While we believe the proportion of cybercriminals relative to the broader population will stay somewhat uniform, we believe the cybercriminals themselves will be more sophisticated and empowered through the use of these AI technologies.
Q
How is CyberCube using AI to strengthen security for its clients?
A

CyberCube utilizes AI to enhance cyber risk assessment and mitigation strategies for businesses via insurers. The customers that we serve are the broader insurance market—brokers, carriers, reinsurers. We are using AI-driven risk quantification to provide predictive analytics to the insurance market where we identify emerging threats and flag the potential financial impact associated with these threats. We also provide portfolio risk modeling for insurers where we help carriers assess accumulation risk, and we help them optimize their underwriting strategies.

Finally, we provide automated cyber risk assessments, which are AI-powered evaluations that provide insights into a company’s security posture. The implication of all of this is that insurers benefit from understanding the drivers of cyber risk, and they utilize this information for underwriting decisions, thereby improving the cyber hygiene of businesses that they provide coverage for. So that’s really the value chain that we work around.

Q
Have new defense tools improved cyber hygiene?
A

The short answer is yes. We have seen insurers prioritize certain questions in their underwriting questionnaires, such as asking whether multifactor authentication is deployed at scale. The reason insurers do this is that some of these factors are directly correlated with the frequency of a cyber event. If you have multifactor authentication in place, you’re less likely to suffer a crippling cyberattack as a business. This in turn has improved the security standard for the average business.

If you think about a small business, in the past, without cyber insurance, it treated cybersecurity as an afterthought. Now businesses are forced to consider cybersecurity a business priority. Insurers have been instrumental in driving the understanding of cyber risk and making those factors important in how businesses invest in cybersecurity tools.

Q
What role do brokers play in advising on cyber threats, particularly to small- and medium-size businesses?
A

Cyber brokers play a pivotal role in guiding SMEs [small and medium enterprises] through the evolving cyber threat landscape in a variety of different ways. No. 1, providing cyber risk education and dispelling the misconception that SMEs are not prime cyber targets. Cybercriminals are opportunistic. They will go after any vulnerable target that they come across.

No. 2, offering tailored risk assessments using AI-powered tools, such as CyberCube, to assess the security hygiene of a company and recommend appropriate coverage.

No. 3, simplifying complex cyber policies by translating technical jargon into actionable business insights for the insurance buyer, and advocating for clients during the claims process. Whenever SMEs are attacked, brokers make sure their clients are supported and receive the post-incident benefits of their policies.

Finally, facilitating access to security resources. Brokers are increasingly aware of cybersecurity solutions that can help solve business problems. Unlike large corporations, which have a CISO [chief information security officer] and a security team, SMEs often lack these capabilities, so brokers can act as advisors, recommending cybersecurity solutions that improve cyber hygiene and earn better terms and rates with carriers. The implication from a business perspective is that SMEs should proactively engage with cyber insurance brokers to build resilience and to secure comprehensive, cost-effective cyber insurance coverage.

Insurtech Sustainability

The fourth quarter of 2024 marked both a steep reduction in insurtech funding and a pivot toward startups focusing on building more sustainable businesses, according to Gallagher Re’s latest Global InsurTech Report.

Global insurtech funding declined 5.6% year over year in 2024 to a seven-year low of $4.25 billion, as a full-year drop in property and casualty insurtech funding was only partially offset by a sharp rise in life and health funding, Gallagher Re reported. In contrast, fourth-quarter funding dropped by about half, to $688 million from $1.38 billion in the third quarter of 2024.

The global number of deals in 2024 fell to a six-year low of 344, with fewer venture investors active. Insurers and reinsurers, however, continue to show strong interest, particularly in AI use cases. AI-focused firms accounted for 42% of deals in the fourth quarter and nearly 35% for the full year. Claims-focused insurtechs accounted for half of the 78 fourth-quarter deals, further illustrating the industry’s interest.

Amid the funding decline, the insurtech industry continues to mature overall, with a stronger focus on profitability and business sustainability rather than prioritizing rapid growth, following a round of layoffs and asset sales over the prior two years, Gallagher Re said.

Tech Road Sign: Theft Ahead

For truckers, finding the quickest route isn’t enough. As cargo theft soars, they also need to find the safest route. To that end, Verisk is launching a data-driven scoring system to help shippers and freight companies avoid routes that carry a higher risk of theft.

Verisk’s new CargoNet RouteScore API uses a proprietary algorithm to assess the likelihood of crime along any driving route in the United States and Canada, with scores ranging from a low of 1 to a high of 100. The score is based on factors including cargo type and value, route length, and locations of origin and destination. It also includes the day of the week and the history of thefts at truck stops.
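Verisk’s actual scoring algorithm is proprietary and not disclosed. Purely as an illustration of how a factor-based 1-to-100 risk score of this kind is often assembled, here is a hypothetical sketch: the function name, every weight, and the normalization caps below are invented, while the input factors mirror those named in the article (cargo value, route length, origin and destination, day of week, truck-stop theft history).

```python
# Hypothetical sketch of a factor-based route theft score (1 = low, 100 = high).
# All weights and caps are illustrative assumptions, not Verisk's algorithm.

def route_theft_score(cargo_value_usd: float,
                      route_miles: float,
                      origin_risk: float,      # 0-1: theft history near origin
                      dest_risk: float,        # 0-1: theft history near destination
                      weekday_factor: float,   # 0-1: e.g. higher for weekend layovers
                      stop_theft_rate: float   # 0-1: theft history at truck stops en route
                      ) -> int:
    """Combine weighted risk factors into a single 1-100 score."""
    # Normalize cargo value and route length onto 0-1 (caps are arbitrary).
    value_factor = min(cargo_value_usd / 500_000, 1.0)
    length_factor = min(route_miles / 2_000, 1.0)

    # Weighted sum of all factors; weights are illustrative and sum to 1.0.
    weighted = (0.30 * value_factor +
                0.15 * length_factor +
                0.15 * origin_risk +
                0.15 * dest_risk +
                0.10 * weekday_factor +
                0.15 * stop_theft_rate)

    # Map the 0-1 weighted sum onto the 1-100 scale and clamp.
    return max(1, min(100, round(1 + 99 * weighted)))
```

A real system would of course derive such weights from historical claims and incident data rather than fix them by hand; the point here is only the shape of the computation, not its calibration.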

Faced with a high score that indicates a high probability of theft, companies can take additional security measures such as using tracking devices, driver teams, and escorts or securing parking spots ahead of time. Shippers can also choose carriers that provide stricter security measures.

Security is a growing problem for truckers, and location matters. Cargo theft incidents jumped 27% (to 3,625) from 2023 to 2024 across the United States and Canada, Verisk CargoNet reports. California and Texas showed the highest increases in thefts, up 33% and 39%, respectively. Among the most affected counties, Dallas County, Texas, endured a 78% surge and Los Angeles County, California, a 50% rise. The estimated average value per theft rose to $202,364 from $187,895 on a year-over-year basis, Verisk reported.
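For readers who want the arithmetic behind those figures, the reported numbers can be checked directly; note that the 2023 incident count is back-calculated from the stated 27% jump, not taken from the report itself.

```python
# Sanity-check the year-over-year changes implied by the reported figures.

incidents_2024 = 3_625
# A 27% jump to 3,625 implies roughly 2,854 incidents in 2023 (back-calculated).
incidents_2023 = round(incidents_2024 / 1.27)

avg_value_2023 = 187_895
avg_value_2024 = 202_364
# Percentage rise in the estimated average value per theft.
value_change = (avg_value_2024 - avg_value_2023) / avg_value_2023

print(f"Implied 2023 incidents: {incidents_2023:,}")
print(f"Average theft value rose {value_change:.1%} year over year")
```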

The hottest targets for criminals in 2024 shifted from cargo such as engine oils, solar energy products, and energy drinks to copper products, consumer electronics, and cryptocurrency mining hardware. While criminals continue to adapt, traditional burglaries and full trailer theft remain at high levels, particularly in major metropolitan areas.

Michael Fitzpatrick, Technology Editor
