10. Appendix — additional resources
While working through SANS training and organizing these notes, I consulted and recommend the following materials for readers who want to go deeper. (Yi, 1/5/2026)
Entries are deduplicated; some titles appear in both industry articles and open-source repos under slightly different names.
Threat modeling & design
- Deciduous — decision tree generator for threat modeling
- Streamlit — common framework for quick ML/security engineering dashboards and prototypes
OWASP cheat sheets (GenAI / ML)
- LLM prompt injection prevention — OWASP Cheat Sheet Series
- Secure AI model ops — OWASP Cheat Sheet Series
- AI agent security — OWASP Cheat Sheet Series
Platforms, pipelines, and MLSecOps
- MLflow — open-source platform for agents, LLMs, and experiment tracking
- LLMOps tools (market guide) — AI Multiple (ML lifecycle tooling; pair with MLSecOps practices and your org’s controls)
- Awesome MLSecOps — curated list (GitHub)
- SAI #21: What is continuous training (CT) in machine learning systems? — example industry article series; substitute your preferred vendor-neutral source if needed
Model tooling, scanners, and robustness
- Trusted-AI Adversarial Robustness Toolbox (ART) — GitHub
- Netron — model visualization
- ModelAudit vs ModelScan: comparing ML model security scanners — Promptfoo (if URL moves, search Promptfoo + ModelAudit + ModelScan)
- 4M models scanned: Protect AI + Hugging Face — Hugging Face / Protect AI collaboration (title paraphrased; see HF blog for current post)
- Unsloth — train and run models locally (efficient fine-tuning)
- ThalesGroup / secure-ml — secure ML framework (requirements, guidelines, tools)
Agents, computer use, and memory
- Security of AI agents — AI Multiple (15 threat themes, OWASP-aligned framing)
- Top agentic AI design patterns for architecting AI systems — Microsoft Azure Architecture Center
- Agentic AI security: what it is and how to do it — AI Multiple (overview; diagrams live on their site)
- Computer-using agent (OpenAI) — OpenAI announcement / product page (URL may update)
- trycua / cua — open infrastructure for computer-use agents (sandboxes, SDKs)
- Memory Bank (Cline) — persistent context patterns for agentic coding assistants
RAG, vectors, and infrastructure
- Powerful comparison: HNSW vs IVF indexing — Pinecone (introductory comparison; vendor-neutral alternatives exist)
- Vector DB comparison: Pinecone vs OpenSearch — example comparison article; pick your preferred vendor write-up
- Cross-service confused deputy prevention (Amazon OpenSearch) — AWS documentation
Signing, supply chain, and BOM
- sigstore / cosign — artifact signing and verification
- Artifact signing and verification with Sigstore Cosign — Sigstore blog (entry point for concepts)
- OWASP AIBOM Generator (Hugging Face Space)
Evaluation, RAG tradeoffs, and training modes
- Full fine-tuning, PEFT, prompt engineering, and RAG: which one is right for you? — representative explainer; many good posts exist
- Large Language Model Evaluation in ’26: metrics & methods — AI Multiple (broad evaluation primer; “steady the course”–style framing appears in many vendor posts)
- Security planning for LLM-based applications — Microsoft (path may change; search “LLM security planning Azure”)
- The ultimate guide to LLM security: risks & practical tips — Promptfoo docs / red teaming (example deep guide)
Concepts & culture (not single links)
- PAL (program-aided language models) — start from the paper: PAL: program-aided language models (arXiv)
- “Ultrathink” in Claude Code — product/workflow terminology; see current Anthropic / Claude Code documentation.
- ANI / AGI / ASI — informal taxonomy of capability levels; treat as conceptual vocabulary, not a standard.
- Workload identity on agentic platforms — see your platform’s IAM docs (a Kagenti identity–style PDF may be vendor-specific training material).
Cybersecurity-focused small language models
- Are small language models the future of cybersecurity AI? — CloudNow (example industry perspective)
- Toward cybersecurity-expert small language models — search arXiv for current papers (topic moves quickly)
Semgrep, MCP, and IDE guardrails
- Semgrep — static analysis; pairs with MCP / IDE workflows for generated code review (see Semgrep docs for MCP server details)
Jailbreaks and misuse research (examples)
- “Do Anything Now” (DAN) / jailbreak families — academic and industry write-ups change frequently; search for DAN jailbreak ChatGPT or OWASP GenAI risk entries for curated context.
- Prefer primary sources (papers, OWASP, vendor security blogs) over orphaned
.mdfilenames without a stable URL.
This appendix is a living list—prune duplicates, fix stale URLs, and add your own org’s internal standards where appropriate.