
Friday, April 3, 2026 — Meeting Recap

RSA recap, Claude code leak, "Meeple" talk, AI governance and graph-backed ontologies


Quick recap. Cloud Security Office Hours featured discussions about conference experiences, particularly RSA and Black Hat, with participants sharing insights about networking, speaking opportunities, and career development in cybersecurity. The group extensively discussed the recent Claude Code leak revealing Anthropic's backend implementation, analyzing the code quality and architectural decisions. Stryker presented her talk proposal, "Meeple: Treating People as Game Pieces in Security," which focuses on avoiding single-dimensional thinking when implementing security controls. The conversation also covered best practices for speaking at conferences, with advice for both experienced and junior professionals. It concluded with a detailed technical discussion about AI security governance, including the use of ontologies, taxonomies, and graph databases to improve reliability and reduce hallucinations in AI systems.

2026-04 · Conferences · AI Governance

Security Risk Assessment Talk Proposal

The meeting began with casual conversation and technical issues, including discussion about a Chinese webcam driver and a humorous thread about the Claude Code leak. The main focus was Stryker presenting a talk proposal titled "Meeple: Building Risks Around Actions, Not Single-Dimensional Roles." Stryker explained the risk of treating people as one-dimensional game pieces rather than complex individuals, which can create security blind spots. The group discussed whether the title was clear enough, with Neil suggesting adding a definition of "meeple" to the title to aid understanding. Stryker outlined the talk's premise of avoiding binary thinking when assessing security risks based on single variables like role or seniority.

Security Approaches and Conference Experiences

The group discussed security approaches and conference experiences. Jay shared his frustration with treating all employees the same from a security perspective, arguing for more tailored approaches based on job roles. Stryker agreed with this approach and emphasized the importance of making experienced professionals think differently about security assumptions. The conversation then shifted to discussing RSA Conference, with participants sharing their different experiences as vendors, participants, and C-suite attendees. Mackenzie, a new recruiter in the field, introduced herself and expressed interest in learning more about the security space.

Cybersecurity Conference Diversity Discussion

The group discussed conference experiences and diversity issues in cybersecurity events. D shared concerns about RSA's oversight in not including a Black affinity group despite having Latino and LGBTQ groups, expressing disappointment with the explanation that there was no space. The conversation then shifted to comparing different cybersecurity conferences, with participants discussing the differences between Black Hat and DEF CON, noting that DEF CON is more inclusive and less corporate-focused, while Black Hat offers more industry-relevant content but at a higher cost. The discussion concluded with reflections on conference networking opportunities and entertainment, including experiences with corporate performances at security events.

Conference Speaking Engagement Strategies

The group discussed strategies for getting selected to speak at industry conferences like RSA. Jay suggested that being selected as a speaker makes it easier to justify attendance to employers since it becomes a business development opportunity rather than just personal networking. Stryker shared her experience getting GEICO to pay for conference attendance through speaking engagements, including her upcoming talks at various events. The discussion emphasized that speakers don't need to be perfectly polished or experienced to present, with several participants sharing that passion and expertise are more important than formal credentials. The conversation concluded with advice for junior professionals like Issam, who expressed concerns about language barriers and experience level, with participants reassuring him that accent and experience level don't impact reception in the industry.

AI Technology and Engineering Discussion

The meeting focused on supporting a participant named Issam who was preparing to give a talk, with Neil and others offering encouragement and reassurance that he didn't need to speak if he wasn't comfortable, and suggesting alternative paths to success in the industry. The group then had an extensive discussion about AI technology, particularly Claude, where they criticized its engineering approach and compared it to "spaghetti code," questioning the wisdom of building complex systems without proper engineering oversight. The conversation included observations about cult-like behavior in the AI community and concerns about the lack of experienced security professionals in AI company leadership roles.

LLM Limitations in Logical Tasks

The team discussed the limitations and challenges of using large language models (LLMs) for logical tasks, particularly highlighting how LLMs often arrive at solutions in illogical ways that they cannot consistently explain. Jay explained their approach to multi-agentic AI systems, where LLMs are used primarily to generate human-friendly responses rather than perform deterministic tasks like flight bookings, which are handled through business connectors. The discussion emphasized that while LLMs have their place in creating natural language outputs, they should not be expected to perform automated, predictable logic tasks that are better suited for traditional systems.
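Jay's pattern can be sketched in a few lines. This is a hypothetical illustration of the architecture described above, not the speakers' actual code: a deterministic "business connector" performs the booking, and the LLM (stubbed here as a plain function) is used only to phrase the result for the user.

```python
from dataclasses import dataclass

@dataclass
class Booking:
    flight: str
    seat: str
    confirmed: bool

def book_flight(flight: str, seat: str) -> Booking:
    """Deterministic business connector: no LLM involved in the logic."""
    # Real code would call the airline's reservation API here.
    return Booking(flight=flight, seat=seat, confirmed=True)

def render_confirmation(booking: Booking) -> str:
    """Stand-in for an LLM call used only to phrase the outcome."""
    # The booking is already decided before this runs, so a hallucinated
    # answer cannot change what was actually reserved.
    return f"Your seat {booking.seat} on flight {booking.flight} is confirmed."

booking = book_flight("LH441", "14C")
print(render_confirmation(booking))
```

The key design choice is that the natural-language layer receives a completed, verifiable result rather than being asked to decide it.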

AI Governance and Security Measures

The group discussed AI governance and security measures, with Jay emphasizing the importance of secure-by-design architecture and threat modeling based on business processes rather than just technical aspects. Milos shared his experience implementing taxonomy, ontology, and graph-based solutions to reduce hallucinations in AI systems, particularly in an insurance company in Peru, though Jay noted this approach was similar to their existing methods. The discussion concluded with Jay recommending a four-part series on AI security that includes separating planning from execution and implementing verification mechanisms for critical use cases.

Secure Agentic AI Implementation Challenges

Jay and Milos discussed challenges in implementing secure agentic AI systems, focusing on preventing system failures and managing user privileges. They highlighted the importance of deterministic processes, input validation, and the separation of planning and execution tasks to mitigate risks. Jay mentioned their team's approach of building custom solutions due to limited vendor support and shared insights on threat modeling and security practices in the AI space. The conversation also touched on the difficulty of finding appropriate design partners and the need for accessible documentation on AI security measures.
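The separation of planning and execution mentioned above can be sketched as follows. This is an illustrative minimal example, assuming an allow-list of action names; the planner here is a stub standing in for an LLM, and none of the identifiers come from the speakers' systems.

```python
import re

# Actions the executor is permitted to run (the allow-list).
ALLOWED_ACTIONS = {"lookup_customer", "send_summary"}

def plan(request: str) -> list[str]:
    """Stand-in for an LLM planner that proposes action names."""
    # A real planner's output is untrusted, so it must be validated.
    return ["lookup_customer", "delete_database", "send_summary"]

def validate(step: str) -> bool:
    """Input validation: allow-listed name with safe characters only."""
    return step in ALLOWED_ACTIONS and re.fullmatch(r"[a-z_]+", step) is not None

def execute(steps: list[str]) -> list[str]:
    """Deterministic executor: runs only validated steps."""
    executed = []
    for step in steps:
        if not validate(step):
            continue  # drop unapproved steps instead of running them
        executed.append(step)  # real code would dispatch to a handler here
    return executed

print(execute(plan("summarize the customer record")))
```

Because validation sits between the planner and the executor, a compromised or confused planner can propose a dangerous step (here, the invented `delete_database`) without it ever being run.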

Graph Databases and Privileges

Milos discussed his work with graph databases, particularly using py.dev as a harness and Neo4j as the preferred graph database. He shared his experience building custom models and the benefits of combining graph databases with semantic layers and ontologies. Jay addressed a question about user privileges in AI systems, explaining the current implementation of service accounts and user delegation, while noting challenges with privilege dropping. Milos suggested using an ontology to define atomic operations and their required privileges, which Jay acknowledged as a feasible approach that could potentially involve temporary role assignments.
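Milos's suggestion can be illustrated with a small sketch: an ontology maps each atomic operation to the privileges it requires, so an agent can be granted exactly the union of privileges its plan needs. All operation and privilege names below are invented for illustration and loosely themed on the insurance example from the discussion.

```python
# Hypothetical ontology: atomic operation -> privileges it requires.
ONTOLOGY = {
    "read_policy":  {"insurance:policy:read"},
    "quote_policy": {"insurance:policy:read", "insurance:pricing:read"},
    "issue_policy": {"insurance:policy:write", "insurance:pricing:read"},
}

def required_privileges(operations: list[str]) -> set[str]:
    """Union of privileges needed for a plan, derived from the ontology."""
    needed: set[str] = set()
    for op in operations:
        if op not in ONTOLOGY:
            # Unknown operations are rejected rather than guessed at.
            raise ValueError(f"unknown operation: {op}")
        needed |= ONTOLOGY[op]
    return needed

print(sorted(required_privileges(["read_policy", "quote_policy"])))
```

In a graph database such as Neo4j, the same mapping would live as `(:Operation)-[:REQUIRES]->(:Privilege)` relationships, with the union computed by a traversal instead of a dictionary lookup.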

AI Security Implementation Strategies

The group discussed approaches to implementing AI systems with appropriate security and access controls. Jay emphasized the importance of time-based or task-based privilege revocation, while Milos suggested applying principles of just-in-time access and least privilege. They explored different frameworks, including Zero Trust, and discussed the challenges of balancing security with practical implementation. The conversation also touched on the evolution of AI technology and its potential applications in business operations, particularly in areas like supply chain management and manufacturing.
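The time-based revocation Jay described can be sketched minimally: a privilege grant carries an expiry, and every use re-checks it. This is an illustrative toy, not a substitute for a real policy engine, and the privilege name is invented.

```python
import time

class Grant:
    """A just-in-time privilege grant that expires after a TTL."""

    def __init__(self, privilege: str, ttl_seconds: float):
        self.privilege = privilege
        # monotonic() is immune to wall-clock adjustments.
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self) -> bool:
        """Re-checked on every use, so revocation needs no callback."""
        return time.monotonic() < self.expires_at

grant = Grant("db:read", ttl_seconds=0.05)
assert grant.is_valid()       # usable immediately after issuance
time.sleep(0.06)
assert not grant.is_valid()   # automatically revoked after the TTL
print("grant expired as expected")
```

Task-based revocation works the same way with a completion flag in place of the clock: the grant is invalidated the moment the task that justified it finishes.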

AI Research and Applications

The group discussed various AI researchers and their approaches, with particular focus on Yann LeCun and his work on world models versus LLMs. Jay expressed preference for LeCun's clear communication style and practical approach, while criticizing Geoffrey Hinton's shift toward more commercial positions. The conversation then shifted to broader AI topics, including concerns about AGI as a goal and discussions about Musk's space-based data center plans, with the group expressing skepticism about the feasibility and practicality of such initiatives. The discussion concluded with reflections on current robotics applications, particularly highlighting warehouse robots and functional non-humanoid robots as more practical and effective than humanoid designs.
