The exploding popularity of AI and its proliferation within the media has led to a rush to integrate this incredibly powerful technology into all sorts of different applications. What remains unclear though is the potential security and reputational ramifications that could result. LRQA Nettitude have tasked a group of our highly skilled security consultants with a passion for AI to develop an assurance line that can offer some insight, as well as identifying ways to implement AI in our own delivery methods and products.

With little in the way of regulation and standard security methodologies in the space, almost daily reports highlighting vulnerabilities or logic flaws that lead to less than ideal responses. This has included AI programs revealing sensitive information, being taken advantage of by malicious users to import malware into code output, or as some university students found out at their cost, taking credit for work it did not complete.

As we begin to research and develop our own security testing methodologies in line with rapidly changing security recommendations and use cases, LRQA Nettitude will use this space to dive deeper into the some of the security issues our customers are likely to face. In addition to the topics below that you can expect to see reviewed and discussed in the forms of blog posts or webinars, LRQA Nettitude would also like to extend an open invitation for feedback and collaboration. If you possess specific security concerns that you would like our team of researchers to investigate, we encourage you to reach out to us at labs@nettitude.com.

Current Regulations

Initial investigation shows the challenges that organisations will face in regulating the use of AI. There are currently conflicting or uncoordinated requirements from regulators which creates unnecessary burdens and that regulatory gaps may leave risks unmitigated, harming public trust and slowing AI adoption. As an example, the UK have several potential pieces of legislation that may cover AI in some form or another:

  • Discriminatory outcomes are covered by the Equality Act 2010
  • Product safety laws
  • Consumer rights law
  • Financial services regulation

LRQA Nettitude plan to identify and review relevant legislation and potential gaps that may affect the various industries that we support.

Future Regulations

Amongst the numerous challenges facing regulators, LRQA Nettitude anticipate that the initial focus will revolve around:

  • Accountability: Determine who is accountable for compliance with existing regulation and the principles. In the initial stages of implementation, regulators might provide guidance on how to demonstrate accountability.
  • Guidance: Guidance will be required on governance mechanisms including, potentially, activities in scope of appropriate risk management and governance processes (including reporting duties).
  • Technical Standards: Consider how available technical standards addressing AI governance, risk management, transparency and other issues can support responsible behaviour and maintain accountability within an organisation (for example, ISO/IEC 23894, ISO/IEC 42001, ISO/IEC TS 6254, ISO/IEC 5469 , ISO/IEC 25059*).

Just recently, the UK government has been setting out its strategic vision to make the UK at the forefront of AI technology. This has been echoed by the news that OpenAI have announced that their first international office outside the US is to be opened in London. “We see this expansion as an opportunity to attract world-class talent and drive innovation in AGI development and policy,” adds Sam Altman, CEO of OpenAI. “We’re excited about what the future holds and to see the contributions our London office will make towards building and deploying safe AI.”

As more information becomes available, LRQA Nettitude consultants will dive deeper into the details in order to bring the most relevant updates to our customers.

Data Privacy

Data privacy is a crucial concern in AI applications, as they often deal with large amounts of personal and sensitive information. Safeguarding data privacy involves implementing measures such as:

  • Anonymization and pseudonymization: Removing or encrypting personally identifiable information (PII) from datasets to prevent the identification of individuals.
  • Data minimization: Collecting and storing only the necessary data required for the AI system’s intended purpose, reducing the risk of unauthorized access or misuse.
  • Secure data transmission and storage: Utilizing encryption and secure protocols when transmitting and storing data to protect it from unauthorized access or interception.
  • Access controls and user permissions: Implementing role-based access controls and restricting data access to authorized personnel only.

The above requirements are something LRQA Nettitude look for in existing engagements, the difference being is that it’s normally obvious where the data is being stored and how it is being transmitted. This is not always the case when implementing third party AI technology and is something that LRQA Nettitude is really keen to review. How is the data you’re inputting into these models being transmitted? How is it being stored, do any 3rd parties have any access to your data? Could other users of the same AI model query it to expose your sensitive data? In the initial development of our AI services, we envisage this as being one of our key areas of focus.

User Awareness Training

Another area of initial focus, and one of the first services we plan on delivering is user awareness training. This plays a vital role in ensuring the responsible and safe use of AI technology and will be the first step to ensuring your sensitive data isn’t inadvertently ingested to an AI model. Some key aspects to cover in such training include:

  • Data handling best practices: Educating users on how to handle sensitive data, emphasising the importance of proper data storage, encryption, and secure transmission practices.
  • Phishing and social engineering awareness: Raising awareness about common attack vectors like phishing emails, malicious links, or social engineering attempts that can lead to unauthorized access to data or system compromise.
  • Understanding AI biases: Teaching users about the potential biases that can exist in AI algorithms and how they can impact decision-making processes. Encouraging critical thinking and providing guidance on how to address biases and how this could tie into regulatory frameworks.
  • Responsible AI usage: Ensuring your AI responses are fact checked and vetted. When being used in technological applications such as code review or code creation, are the libraries or commands being used safe? Or could they import vulnerabilities into your products?

Industry Frameworks

Security organisations within the industry are rapidly putting out new recommendations and working with industry experts to create provisional frameworks. LRQA Nettitude will initially focus on the following:

  • NCSC principles for the security of machine learning – The National Cyber Security Centre have produced numerous principles intended to assist anyone deploying operating systems with a machine learning component. The principles aren’t a specific framework, but provide context and structure to assist in making educated decisions when assessing risk and specific threats to a system.
  • NIST – The National Institute of Standards and Technology released the Artificial Intelligence Risk Management Framework earlier this year which aims to help organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.
  • Mitre – MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems), is a knowledge base of adversary tactics, techniques, and case studies for machine learning systems based on real-world observations, demonstrations from ML red teams and security groups, and the state of the possible from academic research
  • OWASP Top 10 for Large Language Model Applications – OWASP Machine Learning Top 10 is a list of vulnerabilities that could pose a huge risk to ML models if present. The list was introduced with the goal of educating developers, and organizations about the potential threats that may arise in ML.

Research and Whitepapers

Research and whitepapers play a significant role in advancing the field of AI and keeping up with the latest developments. LRQA Nettitude have tasked our researchers to produce a whitepaper that will offer some insight into the risks when implementing AI models in the business. Watch this space for future updates!

Regular News and Updates

Staying informed about the latest news and updates in the AI industry is crucial for understanding emerging trends, breakthroughs, and regulatory developments. LRQA Nettitude plan on providing regular news updates and blogs dedicated to the goings on in the world of security related to AI to help keep our customers up to date on the rapidly changing regulatory frameworks and active exploits.

AI Vulnerabilities

This is perhaps the area our consultants are most eager to explore, there are numerous attack paths that could lead to data exposure within an AI model. It looks as though regulation and design is sometimes outpaced by the ingenuity of threat actors. LRQA Nettitude will be looking to take a proactive approach rather than a reactive one and dedicate time to identifying issues before they become exploitable.

  • Poisoning – Should an attacker gain access to or be able to influence the training dataset. They can then poison the data by altering entries or injecting the training dataset with tampered data. And by doing so, they can achieve two things: lower the overall accuracy of the model or add adversarial patters to generate predictable outcomes.
  • Back Doors – A back door can lead on from poisoning and is a technique that implants secret behaviours into trained ML models by, for example, implementing hard-coded functions to certain parts of the model to manipulate the output.
  • Reverse Engineering – Although less likely as an attacker would first need access to the model itself, reverse engineering a model could assist an attacker in developing further, and more targeted exploits in order to compromise the model. Additionally, is some cases it would be possible to extract sensitive training data from the model file
  • Hallucinations – This is particularly inventive and involves the creation of deceptive URLs, references, or complete code libraries and functions that do not actually exist. When the model calls upon them, they inadvertently link output to attacker controlled resources.
  • Injection attacks: Preventing the injection of malicious code or commands into the AI system, which could lead to unauthorized access or manipulation of data.
  • Inadequate authentication and access controls: Ensuring proper authentication mechanisms are in place to verify the identity of users and restricting access to sensitive functions or data based on user roles and permissions.
  • Insecure data storage: Protecting sensitive data stored by the AI system by implementing encryption, access controls, and secure storage practices.
  • Insufficient input validation: Validating and sanitizing user inputs to prevent malicious inputs or code injections that could exploit vulnerabilities in the AI system.

Conclusion

The AI movement isn’t in the future, it’s here now. LRQA Nettitude would be doing a disservice to ourselves and our clients if we weren’t prioritising the potential risks and regulatory challenges associated with it. This space will be a regular stream of informative content aimed at answering those questions that are emerging, and those that haven’t been considered yet.