AI News and Media

Thought Leadership in AI Security
  • Agentic AI Security and The Case for Microagents - Microagents are smaller, harder, less valuable targets for bad actors, with a smaller blast radius when one is compromised. Properly implemented, microagents reflect a defense-in-depth approach, follow Zero Trust principles, and can support granular data access control. Considering data security in the planning process makes it possible to trace the flow of data through microagent workflows.
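
  A minimal sketch of the granular access control idea, assuming a hypothetical scoped-token scheme (the names ScopedToken, DataStore, and the scopes are illustrative, not from the article): each microagent receives a short-lived credential naming only the data it needs, so a compromised agent can reach only that narrow slice.

  ```python
  # Hypothetical sketch: a microagent with per-scope data access (Zero Trust /
  # least privilege). All names and scopes here are illustrative assumptions.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class ScopedToken:
      """Short-lived credential granting access to explicitly named data scopes."""
      agent_id: str
      scopes: frozenset  # e.g. frozenset({"invoices:read"})

  class DataStore:
      """Enforces a scope check on every read, limiting the blast radius of a
      compromised microagent to the scopes its token names."""
      def __init__(self, records):
          self._records = records  # {scope: payload}

      def read(self, token: ScopedToken, scope: str):
          if scope not in token.scopes:
              raise PermissionError(f"{token.agent_id} lacks scope {scope!r}")
          return self._records[scope]

  # One microagent, one narrow task, one narrow grant:
  store = DataStore({"invoices:read": ["inv-001", "inv-002"],
                     "payroll:read": ["emp-001"]})
  token = ScopedToken(agent_id="invoice-summarizer",
                      scopes=frozenset({"invoices:read"}))
  print(store.read(token, "invoices:read"))   # allowed
  # store.read(token, "payroll:read")         # PermissionError: small blast radius
  ```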

  • Defense Against the Dark Arts: Distillation and Data Theft - Integrating strong conventional security controls with AI model-specific defenses can protect models from a wide range of cyberattacks. But neither conventional security controls nor currently available model-specific defenses can guarantee that a model will not be stolen via distillation. From a cybersecurity standpoint, a defense-in-depth approach, while necessary, is not sufficient.
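
  As one example of a defense-in-depth layer that mitigates but cannot prevent distillation, here is a hedged sketch of a query-pattern monitor; the class name, thresholds, and heuristic are illustrative assumptions, not a control named in the article.

  ```python
  # Hypothetical sketch: flag API clients whose query volume and prompt
  # diversity resemble systematic harvesting of model outputs to train a
  # student model. Thresholds and heuristic are illustrative assumptions.
  from collections import defaultdict

  class DistillationMonitor:
      def __init__(self, max_queries=10_000, min_unique_ratio=0.95):
          self.max_queries = max_queries
          self.min_unique_ratio = min_unique_ratio
          self.counts = defaultdict(int)
          self.uniques = defaultdict(set)

      def record(self, client_id: str, prompt: str) -> bool:
          """Return True if this client should be flagged for human review."""
          self.counts[client_id] += 1
          self.uniques[client_id].add(hash(prompt))
          n = self.counts[client_id]
          unique_ratio = len(self.uniques[client_id]) / n
          # Heavy volume of almost entirely novel prompts looks like an
          # attacker sweeping the input space; a determined adversary can
          # still evade this, which is the article's point.
          return n > self.max_queries and unique_ratio > self.min_unique_ratio
  ```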

  • 7 Dangerous Assumptions About AI Security - As with any other cloud service, cloud providers cannot guarantee the security of customer applications or data. Companies that fine-tune models or use RAG with sensitive data assume the associated risks. Using internal data in AI models, RAG applications, or AI agents increases both insider threat risk and the risk posed by compromised credentials.
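
  One common mitigation for the RAG risks mentioned above is to authorize before retrieving, so an insider or a stolen credential can only surface documents that identity could already read. A minimal sketch, with illustrative field names and a stubbed similarity function standing in for a real vector index:

  ```python
  # Hypothetical sketch: permission-filtered RAG retrieval. Field names,
  # groups, and the similarity stub are illustrative assumptions.
  def retrieve(query_embedding, documents, user_groups, top_k=3):
      """documents: dicts with 'embedding', 'text', and 'allowed_groups'."""
      def score(doc):
          # Dot-product similarity stub; a real system uses a vector index.
          return sum(a * b for a, b in zip(query_embedding, doc["embedding"]))

      # Authorization first: drop anything the caller may not read.
      visible = [d for d in documents if d["allowed_groups"] & user_groups]
      return sorted(visible, key=score, reverse=True)[:top_k]

  docs = [{"embedding": [0.9, 0.1], "text": "Q3 board minutes",
           "allowed_groups": {"executives"}},
          {"embedding": [0.8, 0.3], "text": "Public FAQ",
           "allowed_groups": {"everyone", "executives"}}]
  print(retrieve([1.0, 0.0], docs, user_groups={"everyone"}))  # only the FAQ
  ```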

  • Open Source AI vs. AI Model Openness - Rather than relying on an imprecise binary categorization (open source or not open source), users of AI models should make their own data-driven decisions. Measuring AI model openness may be more effective than forcing AI models into existing definitions of open source, or altering the definition itself.
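
  To make "measuring openness" concrete, here is a hedged sketch of a weighted, per-dimension score in place of a binary label; the dimensions and weights are illustrative assumptions, not a published standard.

  ```python
  # Hypothetical sketch: score model openness along several dimensions
  # instead of a binary open-source label. Dimensions and weights are
  # illustrative assumptions.
  OPENNESS_DIMENSIONS = {
      "weights_released": 0.25,
      "training_code_released": 0.20,
      "training_data_documented": 0.20,
      "evaluation_results_reproducible": 0.15,
      "license_permits_commercial_use": 0.20,
  }

  def openness_score(model_facts: dict) -> float:
      """model_facts maps each dimension to True/False; returns 0.0-1.0."""
      return sum(weight for dim, weight in OPENNESS_DIMENSIONS.items()
                 if model_facts.get(dim, False))

  # A model with open weights and a permissive license but undisclosed data:
  print(openness_score({"weights_released": True,
                        "license_permits_commercial_use": True}))  # 0.45
  ```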

  • Can the AI Industry Regulate Itself? - By proactively investing in responsible AI, US-based AI companies are, in effect, self-regulating. Whether self-regulation by industry players is sufficient to ensure adequate AI safety and security for US citizens remains to be seen. What is clear is that AI investment, innovation, and technical advances in the US will continue for the foreseeable future.

  • AI Models: You Break it, You Buy It - Components of responsible AI (e.g., security, safety, privacy, fairness, accuracy, trustworthiness, human control, human-centric design, explainability, and transparency) differ from one another, require different testing tools, and may be addressed at different stages of the MLOps pipeline and the model or application software development lifecycle.
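
  The idea that each component is tested with different tools at different stages can be sketched as a simple stage-to-component map; the mapping below is an illustrative assumption, since real programs vary by organization and regulation.

  ```python
  # Hypothetical sketch: map responsible-AI components to the pipeline stage
  # where each is most naturally tested. The mapping is an illustrative
  # assumption, not a standard.
  RESPONSIBLE_AI_TEST_PLAN = {
      "data collection": ["privacy", "fairness"],
      "training":        ["accuracy", "robustness"],
      "evaluation":      ["safety", "explainability", "fairness"],
      "deployment":      ["security", "transparency"],
      "operation":       ["human control", "monitoring"],
  }

  def stages_for(component: str):
      """Return the pipeline stages at which a given component is tested."""
      return [stage for stage, comps in RESPONSIBLE_AI_TEST_PLAN.items()
              if component in comps]

  print(stages_for("fairness"))  # ['data collection', 'evaluation']
  ```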

Video Recordings
  • Responsible AI Technical Requirements - Video recording of a presentation on ethical and responsible AI, with an analysis of technical requirements for AI models and applications, from the IEEE Consultants' Network of Silicon Valley (CNSV).

Slide Presentations