The Biggest AI Security Risks Aren't What You Think
New research reveals the concerns holding businesses back from GenAI, and why the real risk might be doing nothing at all.
If you've been following the AI conversation, you've probably heard plenty about hallucinations, bias, and inaccuracy. Those are real concerns. But according to the latest global research, they're no longer the ones keeping decision-makers up at night.
The risks now rising to the top of the list are privacy, compliance, and regulatory exposure.
For businesses across Australia and New Zealand, where data sovereignty and privacy regulation carry real weight, this shift matters more than many realise.
The Risk Landscape Has Changed
Recent research from AWS and BCG surveyed more than 1,200 technology decision-makers globally as part of their GenAI Customer Research programme. The findings show a clear shift in how organisations think about AI risk.
Just two years ago, the dominant concerns were largely technical: model safety, financial risk, and inaccuracy. Today the picture looks different. Personal privacy and legal or compliance risk now sit at the top, each cited by 27% of respondents. Model safety (24%), inaccuracy (24%), and financial risk (23%) have all moved down the list.
What's particularly notable is how many of the fastest-rising concerns are relatively new. Personal privacy, explainability, IP infringement, and workforce displacement were not even tracked two years ago. Today they sit among the top concerns organisations report.
This is not a story about AI becoming riskier. It is a story about organisations becoming more mature in how they assess risk. As adoption moves from experimentation to production, the questions shift from “does this work?” to “can we trust it at scale?”
It's Not the Technology. It's Trust.
When the research looks deeper into specific security concerns, the pattern holds.
The top concern cited by respondents is model and knowledge integrity: the risk that AI training data, knowledge sources, or embeddings could be tampered with or corrupted. It was ranked as the primary concern by 17% of respondents.
Close behind are insider misuse or loss of control over outputs (14%), data leakage (14%), and regulatory compliance risk (13%). AI-enabled fraud or malware rounds out the top five at 12%.
What stands out here is what does not rank highly. Traditional infrastructure concerns such as vendor trust (6%) and monitoring tools (7%) appear near the bottom of the list. Most organisations are reasonably confident in their technical foundations. What concerns them is what happens on top of those foundations: how models behave, who controls the outputs, and whether the organisation can meet its regulatory obligations.
For organisations operating in regulated sectors such as healthcare, financial services, insurance, and government, these are not abstract concerns. They are the exact questions compliance teams, boards, and regulators are already asking. Those questions only become more pressing as AI moves from generating content to taking autonomous action.
Agentic AI Raises the Stakes
This is where the agentic AI conversation and the security conversation intersect.
When AI systems are simply generating text or summarising documents, the consequences of an error are usually limited. A human reviews the output, corrects it, and moves on. But when AI systems begin taking action on behalf of the business, such as processing transactions, routing requests, or interacting with customers autonomously, the impact of a security failure increases significantly.
The research highlights two areas where agentic systems raise the stakes. The first is observability. Autonomous systems require stronger monitoring because you cannot rely on a human reviewing every output before it reaches a customer or triggers a downstream process. The second is indemnity and regulatory exposure. When AI is making decisions rather than simply offering recommendations, accountability becomes more complex.
This does not mean organisations should avoid agentic AI. It means they need the right foundations in place: clear governance frameworks, strong data security, robust monitoring and logging, and partners who understand both the technology and the regulatory landscape.
What ANZ Businesses Should Be Thinking About
For organisations in Australia and New Zealand, the regulatory environment adds another layer of complexity. The Privacy Act in New Zealand, the Australian Privacy Principles, and emerging AI guidance from regulators all create obligations that need to be built into AI systems from the beginning, not added later.
A practical starting point is data governance. Before deciding which model to use or which agent framework to deploy, organisations should ensure their data is properly classified, access-controlled, and auditable. The research confirms that data leakage and insider misuse are among the top concerns. Strong governance directly addresses both.
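As a rough illustration of what that can look like in code, the sketch below tags records with a classification level, checks a caller's clearance before releasing data to an AI workload, and writes an audit entry for every decision. The names (Classification, Record, fetch_for_ai) and the stdout audit trail are hypothetical placeholders, not a reference to any particular platform or to the research cited above.

```python
# Minimal sketch: classification-aware access control with audit logging.
# All names here are illustrative; a real deployment would enforce this in
# the data layer and ship audit entries to a central, tamper-evident store.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import IntEnum
import json


class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3


@dataclass
class Record:
    record_id: str
    classification: Classification
    content: str


def audit(event: dict) -> None:
    """Append a structured, timestamped entry to an audit trail (stdout here)."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    print(json.dumps(event))


def fetch_for_ai(record: Record, caller: str, clearance: Classification) -> str | None:
    """Release a record to an AI workload only if the caller's clearance
    covers its classification, and log the decision either way."""
    allowed = clearance >= record.classification
    audit({
        "action": "ai_data_access",
        "record": record.record_id,
        "classification": record.classification.name,
        "caller": caller,
        "allowed": allowed,
    })
    return record.content if allowed else None


if __name__ == "__main__":
    doc = Record("customer-42", Classification.CONFIDENTIAL, "...")
    fetch_for_ai(doc, caller="summarise-agent", clearance=Classification.INTERNAL)   # denied, logged
    fetch_for_ai(doc, caller="claims-agent", clearance=Classification.RESTRICTED)    # allowed, logged
```

The point of the pattern is that every access by an AI workload leaves a record, whether it was allowed or not, which is exactly what compliance teams will ask for later.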
Observability should also be designed into the architecture from the start. Monitoring cannot be an afterthought. As AI systems become more autonomous, organisations need clear visibility into what those systems are doing, what data they access, and how decisions are being made. This is particularly important for compliance reporting and regulatory oversight.
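The same idea can be sketched at the level of individual agent actions: wrap each tool an agent can invoke so that its inputs, outcome, and latency are logged before the result reaches a customer or a downstream process. The names (observed_tool, route_customer_request) and the logging destination are again illustrative assumptions; in practice the entries would feed a centralised monitoring and compliance pipeline.

```python
# Minimal sketch: action-level observability for an agentic workflow.
import functools
import json
import logging
import time
from typing import Any, Callable

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("agent-audit")


def observed_tool(tool_name: str) -> Callable:
    """Wrap an agent tool so every invocation is logged with its inputs,
    outcome, and latency."""
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            start = time.perf_counter()
            entry = {"tool": tool_name, "args": args, "kwargs": kwargs}
            try:
                result = func(*args, **kwargs)
                entry.update(status="ok", result=repr(result))
                return result
            except Exception as exc:
                entry.update(status="error", error=repr(exc))
                raise
            finally:
                entry["latency_ms"] = round((time.perf_counter() - start) * 1000, 2)
                logger.info(json.dumps(entry, default=str))
        return wrapper
    return decorator


@observed_tool("route_customer_request")
def route_customer_request(request_id: str, queue: str) -> str:
    # Placeholder for a real action the agent would take autonomously.
    return f"{request_id} routed to {queue}"


if __name__ == "__main__":
    route_customer_request("REQ-1001", queue="claims")
```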
It is also critical to understand your regulatory obligations early. Organisations operating in healthcare, financial services, or government already face strict requirements around data handling and accountability. Building systems that can demonstrate compliance from day one will reduce risk significantly as AI adoption grows.
Finally, partner selection matters. The research shows that only 6% of respondents rank vendor trust as a top concern, suggesting many organisations feel confident about their chosen partners. But that confidence must be backed by real capability. Look for partners with accredited expertise, experience in regulated environments, and the ability to integrate security throughout the entire solution stack.
The Real Risk Is Standing Still
It is easy to see all of this and conclude that the safest option is to wait. Let the technology mature. Let the regulations settle. Let others go first.
But the research suggests the opposite.
The organisations leading in AI adoption are also the ones with the most mature understanding of AI risk. They are not ignoring risk. They are managing it deliberately, building governance frameworks and technical foundations that allow them to move forward with confidence.
The real risk is not experimenting with AI responsibly. The real risk is allowing competitors to build these capabilities while you remain on the sidelines.
At Easycoder, we believe trust sits at the core of every system we build. Technology should never feel like a blocker, and security should never be an afterthought. It is something you design into the foundation from day one.
If you're navigating the intersection of AI, security, and compliance in your organisation, we'd love to talk. Our team works with businesses across Australia and New Zealand to build AI systems on foundations they can trust.



