Cloud Security Office Hours Banner

Friday, November 29, 2024 — Meeting Recap

Addressing AI Risks in Security Reports

— Addressing AI Risks in Security Reports

Quick recap. The team discussed the importance of considering the self-selected nature of data in security reports, particularly in relation to AI models and cloud security solutions. They also explored the vulnerabilities in AI packages and components, the challenges of regulating AI, and the potential risks and benefits of AI in relation to security and regulation. The conversation ended with plans for a recap session the week after, and the team expressed concerns about protecting against AI model vulnerabilities and the need for human oversight in AI systems.

2024-11AISupply ChainVulnerabilitiesConferences
Show 9 discussion topics

Addressing AI Risks in Security Reports

Neil discussed the importance of considering the self-selected nature of data in security reports, particularly when it comes to companies using cloud security solutions. He highlighted the need for security teams to be aware of the rapid adoption of AI models in custom applications by business units, despite potential risks. Neil also shared his personal experience with an AI product that generated inaccurate results, emphasizing the need for security teams to be proactive in addressing these issues. The team agreed to continue the discussion in the next meeting.

AI Package Vulnerabilities and Security

Neil discussed the vulnerabilities in AI packages and components, noting that while they are not currently publicly exploitable, he expects this to change as researchers focus on these areas. He highlighted a specific issue with Amazon Sagemaker, which automatically creates an S3 bucket for storing files, using a default naming convention that makes it easily discoverable. Neil emphasized that many new services, particularly those driven by developers, are not secure by default. He also pointed out that many organizations have exposed access keys to AI services, which could be exploited for malicious purposes.

Secure Design and Default Settings

Neil discussed the importance of secure design and default settings in cloud computing. He highlighted the risks of exposing secrets in Git commit history and the need for a plan to remove them. He also pointed out the vulnerabilities in using Sagemaker with admin privileges and the lack of configuration for IMDSv2 in many instances. Neil emphasized the need for secure practices, such as disabling route access from notebook instances and configuring private endpoints in Azure Openai. He concluded by noting the high percentage of instances without IMDSv2 configuration and the potential for significant security breaches.

Addressing AI Security Risks and Challenges

Neil discussed the challenges and risks associated with AI security, particularly in relation to Azure Open AI accounts. He highlighted that 27% of organizations have not configured their accounts to be privately accessible, making them publicly accessible. He also noted that most organizations are not encrypting their notebooks, training pipelines, or models, making them vulnerable to unauthorized access. Neil emphasized the need for better security measures, such as infrastructure as code, vulnerability management, and secure data practices. He also mentioned the AI Goat project, an open-source initiative that aims to illustrate common security risks in AI applications.

Exploring Cloud Security and Infrastructure

Neil discussed the team's desire for input and pull requests on a project, mentioning that it could be found on the Cloud Security Office hours website. Chris suggested sharing the project with others and noted that it could be a good entry point for those wanting to get hands-on experience. Kyle added that the internet was designed with security and privacy as afterthoughts, necessitating the addition of features like HTTPS and TLS. Richard shared his experience with default passwords in mid-range systems and suggested exploring the infrastructure of the AS-400 series as a potential entry point. Shawn agreed that all this information exists in the cloud and will continue to do so.

Managing System Access and Cybersecurity

The team discussed the challenges of managing system access and security after employees leave an organization. They shared experiences of finding default passwords on firewalls and the difficulty of removing access from multiple systems. The conversation also touched on the use of online resources for learning cybersecurity, with a focus on TryHackMe and HackTheBox Academy. The team agreed that while these resources can provide foundational knowledge, practical experience is essential for deeper understanding. They also emphasized the importance of learning fundamentals first before diving into more advanced topics.

Regulating AI and Vulnerability Challenges

Neil discussed the challenges of regulating AI, noting that existing compliance frameworks like GDPR are often reactive rather than proactive. He gave an example of how PCI certification requirements changed after an incident involving unencrypted credit card transactions. Juninho added that the lack of understanding about AI technology makes it difficult to create effective regulations. Nathaneal asked about the vulnerability of AI systems, particularly in relation to jailbreaking, and Neil clarified that while jailbreaking is a vulnerability, it is not the same as a CVE-style vulnerability. Chris thanked Neil for his talk and encouraged others to introduce themselves. Juninho, who has been with Orca for a couple of months, introduced himself and shared his background in Google Cloud security.

Addressing AI Model Vulnerabilities

Brian expressed concerns about protecting against AI model vulnerabilities, such as model poisoning. He questioned the feasibility of implementing policies to prevent such issues and the difficulty of testing for them. Philippe shared his experience with building a chatbot using LangChain, which was initially difficult to pen test due to the need for multiple attempts for the attack to work. He suggested using observability tools to find prompts and responses, but noted that this method was not real-time and required significant time to find vulnerabilities. Shawn mentioned emerging technology for testing AI models programmatically to identify vulnerabilities. Philippe also discussed the challenges of limiting an AI model's scope to specific tasks, such as mathematical questions, and the weakness of protection mechanisms like prompting techniques. He also shared his experience with Microsoft's OpenAI interpreter, which he found to be vulnerable to certain attacks.

AI Risks, Benefits, and Regulation

In the meeting, the team discussed the potential risks and benefits of AI, particularly in relation to security and regulation. Steve expressed concerns about the security of AI systems and the need for human oversight, while Neil argued that AI is not as dangerous as often portrayed and that evidence-based discussions are needed. Shawn suggested that AI is a natural progression and that regulation is necessary to ensure responsible use. The team also discussed the need for government regulation and the importance of understanding the technology behind AI. The conversation ended with plans for a recap session the week after.

↑ All meeting recaps