The GenAI Building Blocks

GenAI is here to stay: What it means for cyber security

GenAI is here to stay: What it means for cyber security
  • Article
  • 10 minute read
  • 22 Mar 2024

By Manuel Seiferth, Henning Kruse and Jordan Pötsch. Generative AI is on a fast pace towards copiloting us in our work. It helps with routine tasks, research and writing. The most used large language models (LLM) on the market compete for the best output quality, speed and resource efficiency. Retrieval-augmented generation (RAG) has opened the marketplace for individualization and has arguably eaten the stakes of some startups.

This way, businesses can use general GenAI models and augment them with their internal and proprietary data. At the same time, the data stays within a secure environment without the need for training it directly into the models. The market perspective shows: As the LLM output quality converges, differentiation comes from the data that the models have access to. Businesses need to find ways to securely feed the generative AI with their data to not fall behind.

Cybersecurity of AI

Looking at how AI enters the market in the form of APIs, tools, modules and applications that are being integrated into many of the big SaaS and PaaS offerings, it becomes evident that we need to extend already existing security governance, policies and processes to cover the AI aspects. GenAI does not require completely new structures and processes, but the existing artifacts should be extended by specific AI risks and controls while the established approach stays intact for AI use cases.

We see many new AI use cases coming up and being integrated into software and processes. Although we aim for the quick wins, it makes sense to systemize AI security considerations for the future. These use cases will unlikely remain the only ones, and with documented processes and checklists, you can reuse your considerations. Eventually, a management system extension for AI helps your security teams to support securely integrate business use cases and orchestrate company-wide implementations.

Extending security capabilities to include considerations around AI-specific risks and threats - PwC

The newly introduced concept of an AIMS (AI Management System) organizes how companies should manage AI, including security. In our perspective, AIMS does not mean to build a parallel, new structure, but to integrate with an existing ISMS (Information Security Management System).

Readiness & Strategy

In order to effectively manage the utilization of AI tools, several critical moves must be implemented. An AI security strategy that includes your design principles and management accountability is the first step. The strategy integrates into your overall AI business approach and provides guardrails as well as influences platform and development decisions. Furthermore, you can derive your required AI security capabilities and the respective structural organization. With that in mind, you can quickly discover your AI security readiness and develop a roadmap.

Cyber AI Governance

It’s essential to establish or enhance existing data governance and lifecycle processes. This involves identifying and categorizing sensitive data within the organization’s ecosystem, as well as implementing protocols to protect it throughout its lifecycle. To streamline this process, leveraging tooling for automated labeling and protection of sensitive data can be highly beneficial. By identifying and applying appropriate security measures to sensitive data, organizations can minimize the risk of data breaches and ensure compliance with regulatory requirements. If the AI is provided by a third party, solid contracts and third party security is required.

Implementing a requirement for the registration of large datasets used for AI training is crucial. This helps you maintain oversight of data usage and ensures that proper security measures are in place to protect critical information. Upskilling security teams on the specific risks associated with AI is essential. By embedding security experts into AI project teams, you can proactively identify and address security concerns throughout the development and deployment process. Developing acceptable use policies for AI tools helps to mitigate the risk of misuse.

Technical implementation

Besides security teams, educating users on the specific risks and threats posed by GenAI is essential. By raising awareness and providing training on best practices for secure development, teams can better understand and mitigate potential vulnerabilities. Furthermore, extending secure development processes to encompass GenAI is crucial. This involves enhancing threat modeling and risk assessments to account for the specifics of AI systems compared to conventional applications.

Deploying technology to mitigate new threats specific to GenAI, such as prompt engineering, is essential for proactive risk management. By leveraging advanced technologies and techniques, organizations can quickly identify and mitigate emerging threats before they escalate. Additionally, extending threat detection and response capabilities to include GenAI applications and infrastructure is imperative. This ensures that potential security incidents involving AI systems are promptly detected and addressed to minimize impact. Extending red teaming and security testing to cover GenAI applications is crucial for identifying and mitigating potential vulnerabilities. By subjecting AI systems to rigorous testing and simulated attacks, organizations can identify weaknesses and strengthen their security posture against evolving threats.

Monitoring

Organizations must actively monitor AI tools for potential threats and malicious activities. By implementing robust threat detection mechanisms, organizations can promptly identify and respond to any security incidents, thereby minimizing the impact on operations and protecting sensitive data from unauthorized access or exploitation. Detection can include the model in- and outputs, e.g. as part of a policy engine, the compute and cost utilization, human sample checks and conventional application security events. Also consider AI compute consumption and related cost.

AI for Cybersecurity

Generative AI offers significant efficiency advantages, allowing security team members to concentrate on high-impact tasks and enhancing their organization’s security posture. Its primary benefits lie in streamlining tasks within existing workflows of analysts and security teams, such as assisting in drafting incident reports and management reporting.

It’s important to recognize that AI will not replace human security teams for now; generative AI applications still require close human oversight and are more suited for augmentative and intelligence automation roles. A thoughtful approach, informed by an in-depth understanding of an organization’s processes, workflows, bottlenecks, and expertise, is crucial for effective integration and identifying where GenAI will provide value.

Enhancing the capacity to detect and respond effectively is essential in the current cybersecurity environment. Minimizing the response time can be achieved by adopting GenAI-powered security tools. These tools empower Security Analysts to respond swiftly to incidents, thus minimizing potential damages. Automation plays a crucial role in enhancing response efficiency. Developing GenAI-enabled automations enables the orchestration of rapid and effective responses to cyber incidents. By automating repetitive tasks and decision-making processes, security teams can focus on more complex threats. Another vital aspect is generating new detections. Leveraging GenAI, organizations can automatically generate new detections in query languages based on incident findings and threat intelligence. This proactive approach enhances the organization’s ability to detect and mitigate emerging threats before they escalate. Furthermore, it’s essential to identify priority recommendations amidst the plethora of incident reports. GenAI can be leveraged to distill key insights from these reports and provide tailored remediation recommendations. This ensures that security efforts are focused on addressing the most critical issues, thereby strengthening overall cybersecurity posture.

Other use cases include accelerating policy drafting, building internal knowledge bases and advisories for application owners and helping in reporting tasks. GenAI can aid in drafting red team and penetration testing reports, as well as writing recommendations.

How PwC can help you

We as PwC are following these rapid trends and help our clients with the adoption of GenAI across business units securely. We are committed to build on existing processes and controls and harmonizing the broad technical guidance that is given by the various standards and frameworks that have been published over the past months.

The secure GenAI Adoption Framework from PwC

All adoptions have a start. PwC can help you assess your current security organization’s readiness for AI, identify potential shortcomings in responsibilities, processes and organizational integration, and prioritize actions to mitigate them. PwC can also help you benchmark your AI security performance against industry standards and best practices, and provide recommendations for improvement.

PwC can help you design and implement a comprehensive AI cyber governance framework that covers all aspects of AI development and deployment, such as data quality, model validation, algorithm transparency, ethical principles, and compliance. PwC can also help you establish clear roles and responsibilities, policies and procedures, controls and monitoring, and reporting and escalation mechanisms for AI security and risk management.

PwC can help you ensure that your data is secure, accurate, and compliant throughout the AI lifecycle, from collection to processing to storage to sharing. We prevent data leaks through AI access and help AI tools to base their output on the most recent and correct data. PwC can help you implement data governance, anonymization, access control, audit trails, and backup and recovery solutions to protect your data from unauthorized or malicious use. PwC can also help you comply with data privacy and protection regulations, such as GDPR and CCPA, and manage data consent and ownership issues.

PwC can help you implement AI solutions that are secure, reliable, and trustworthy for your specific business needs and objectives. For your cyber security teams, we closely work with our technology partners to enable our clients detection and response capabilities, as well as providing our client’s collective best practice use cases for daily security routines. PwC can help you select the most suitable AI techniques and tools, integrate them with your existing systems and processes, and test and validate them for functionality and quality. PwC can also help you monitor and evaluate the performance and impact of your AI solutions, and provide ongoing support and maintenance.

Secure and accelerate your Microsoft 365 Copilot adoption

In the rapidly evolving digital landscape, the adoption of Microsoft 365 Copilot is a key focus for organisations aiming to enhance productivity and collaboration. We can help you gain confidence that you have a clear understanding of the safeguards required to securely adopt Microsoft 365 Copilot based on our technology enabled rapid cyber security readiness and risk exposure assessment.

Find out more

Follow us

Contact us

Franz Steuer

Franz Steuer

Partner, PwC Germany

Andreas Hufenstuhl

Andreas Hufenstuhl

Partner, Data & AI Use Cases, PwC Germany

Tel: +49 1516 4324486

Christine Flath

Christine Flath

Partner, PwC Germany

Tel: +49 171 5666490

Manuel Seiferth

Manuel Seiferth

Partner, Cyber Security & Privacy Strategy, Risk and Compliance, PwC Germany

Tel: +49 160 536-3800

Henning Kruse

Henning Kruse

Senior Manager, PwC Germany

Jordan Pötsch

Jordan Pötsch

Senior Associate, PwC Germany

Tel: +49 211 9814086

Hide