Rubric → SOC 2 alignment
Purpose
Apply the agentic-engineering rubric in a SOC 2-aware way so rubric-level-2 implementations produce audit-aligned evidence as a natural side effect — no retrofit, no parallel compliance program, no wasted motion. Distinct from other recipes in that it does not introduce a new mechanism; it re-frames existing rubric application for compliance-aware delivery.
The recipe serves client engagements and internal projects where SOC 2 readiness is a real requirement. It is not a certification path — audit engagement, evidence collection, and the CC1 / CC3 / CC9.2 concerns the rubric does not cover remain separate work. What this recipe provides is the rubric-adjacent half of that work, made visible and operational.
Architecture
Three layers:
- Alignment surface — the 14 rubric-level-2 ↔ TSC-control mappings where applying the rubric produces audit-ready evidence directly (tabled below).
- Operational flow — scoping, application, evidence-capture, gap-bridging. How the recipe is actually used on a project.
- Out-of-scope markers — what the rubric does not cover for SOC 2 readiness, so the project brings complementary instruments for those concerns rather than assuming the rubric handles them.
Alignment surface
Rubric-level-2 on these criteria produces evidence directly usable in a SOC 2 engagement. Direct = same concern, close wording; adjacent = overlapping concerns, complementary not redundant; partial = one covers a subset of the other.
| Rubric criterion | TSC control(s) | Alignment quality | What rubric-level-2 covers | What’s still needed for audit |
|---|---|---|---|---|
| PL4-least-privilege | CC6.3 | direct | Strict least-privilege; write requires explicit elevation | Documented elevation policy, logged elevation events, quarterly access review sign-off |
| PL4-branch-protection | CC8.1 | direct | Protected branches; human approval on merges; bypass audited | Documented change-management policy naming the protected-branch rules; approval evidence retained for audit period |
| PL2-secret-hygiene | CC6.1 (keys), CC6.7 (transmission) | direct | New leaks blocked; history clean; keys rotated | Key rotation schedule documented; rotation events logged; crypto-module inventory |
| PL2-sast-dast | CC4.1, CC7.1 | direct | Both tools; tuned; suppressions accountable with rationale + reviewer | Scan cadence documented; findings and suppressions retained for audit period |
| PL2-external-pr-review | CC8.1 (Approves System Changes) | direct | Layered: agent pre-review + human glance | Approval evidence per change; reviewer identity retained; review-to-merge lineage auditable |
| PL2-agent-audit-trail | CC4.1, CC7.1 | direct | Decision-level reasoning logged; queryable; reversible | Retention policy for audit logs covering the audit period (typically 12 months) |
| PL2-load-stress-testing | A1.1 | direct | Run on production-mirrored env, scheduled | Test cadence documented; capacity forecasts retained; breach response evidence |
| PL5-change-sets | CC8.1 (Documents Changes, Tracks System Changes) | direct | Automated change sets with release notes | Release-notes retained for audit period; lineage from change set → deployed state auditable |
| PL4-environment-isolation | A1.2 | adjacent | Isolated staging/prod with parity-checking; on-demand production-mirrored replica | Environmental-protection evidence (power, climate, network); backup and recovery test results |
| PL4-pii-masking | C1 (Confidentiality), P4 (Privacy — retention, use, disposal) | adjacent | Enforced at DB and telemetry layers; agent and logs cannot see raw PII | PII inventory, classification policy, disposal procedures documented; combined with the engineering-PII boundary stance (see Philosophy §Scope and boundary) produces a stronger posture than C1/P4 baselines require |
| PL4-release-strategy | CC8.1 (controlled deployment), A1.2 | adjacent | Percentage rollouts with metric gating; agent platform-bounded | Rollout policy documented; metric-gate thresholds retained; rollback test evidence |
| PL4-agent-invokable-rollback | CC7.5 (recovery), CC8.1 POF Provides for Changes Necessary in Emergency Situations | adjacent | One-command, agent-callable rollback | Rollback procedure documented; emergency-change policy named; post-rollback review evidence |
| PL3-emission-quality + PL3-agent-queryability | CC7.1, CC7.2 | adjacent | Structured logs with pseudonymous correlation IDs; agent directly queries logs, metrics, traces | Monitoring policy documented; detection-procedure inventory; anomaly-response evidence |
| PL5-pipeline-reliability + PL5-cicd-pipeline-health | CC8.1, CC7.1 | adjacent | Reliable pipeline with agent-driven transitions; fast, reliable, agent-readable logs | Pipeline-incident response evidence; infrastructure-failure vs. real-failure distinction retained |
Operational flow
During client scoping:
- Identify SOC 2 category scope — Security always; confirm which of Availability / Processing Integrity / Confidentiality / Privacy are in scope based on the client’s commitments to their customers.
- Map against the alignment surface — identify the subset of the 14 alignment rows that correspond to categories in scope (e.g. Availability in scope → `PL2-load-stress-testing` and `PL4-environment-isolation` become load-bearing).
- Identify gaps — name the TSC concerns not covered by the rubric (CC1 governance, CC3 formal risk assessment, CC9.2 vendor risk, CC7.3–CC7.5 incident response, Privacy lifecycle if Privacy in scope). Surface these as out-of-rubric work requiring complementary instruments.
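The scoping steps above can be sketched as a filter over the alignment surface. Everything here is illustrative: the row names and category tags are a four-row stand-in for the full 14-row table, and `scope_recipe` is a hypothetical helper, not part of the rubric.

```python
# Hypothetical sketch of the scoping step: filter alignment rows to the
# TSC categories in scope, and always surface the out-of-rubric gaps.
# Row/category data below is a partial stand-in for the full table.
ALIGNMENT_ROWS = {
    "PL4-least-privilege": {"Security"},
    "PL2-load-stress-testing": {"Availability"},
    "PL4-environment-isolation": {"Availability"},
    "PL4-pii-masking": {"Confidentiality", "Privacy"},
}

OUT_OF_RUBRIC = ["CC1 governance", "CC3 risk assessment",
                 "CC9.2 vendor risk", "CC7.3-CC7.5 incident response"]

def scope_recipe(categories_in_scope):
    """Return (load-bearing alignment rows, out-of-rubric gaps)."""
    rows = sorted(name for name, cats in ALIGNMENT_ROWS.items()
                  if cats & categories_in_scope)
    return rows, OUT_OF_RUBRIC

# Availability in scope pulls in the load-stress and isolation rows;
# the gap list is surfaced regardless of category scope.
rows, gaps = scope_recipe({"Security", "Availability"})
```

The gap list is returned unconditionally because CC1/CC3/CC9.2/CC7.3–CC7.5 sit under Security, which is always in scope.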
During project delivery:
- Apply rubric level-2 with evidence capture — for each alignment-row criterion, implement level-2 with audit-evidence retention built in (log retention aligned to audit period, approval evidence retained, change sets archived). Evidence capture is not a retrofit; it is the difference between “our controls exist” and “we can prove our controls operated effectively over the audit period.”
- Maintain the engineering-PII boundary — apply `PL4-pii-masking`, `PL4-memory-safety`, `PL4-prompt-injection-defence`, and ingestion discipline in `PL1-real-world-feedback` and `PL3-emission-quality` (the five criteria named in Philosophy §Scope and boundary). This produces a stronger-than-baseline posture on C1 / P4.
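The "operated effectively over the audit period" distinction above is checkable: retained evidence must reach back to the start of the window, not merely exist today. A minimal sketch, assuming a 12-month Type 2 window and a flat list of evidence dates (both assumptions, not rubric policy):

```python
from datetime import date, timedelta

# Assumed 12-month Type 2 audit window; adjust per engagement.
AUDIT_PERIOD_DAYS = 365

def covers_audit_period(evidence_dates, as_of):
    """True if retained evidence reaches back to the start of the window.

    "Controls exist today" would only need a recent record; Type 2
    readiness needs the oldest retained record to predate the window.
    """
    window_start = as_of - timedelta(days=AUDIT_PERIOD_DAYS)
    return bool(evidence_dates) and min(evidence_dates) <= window_start

# Approval records spanning the whole window pass the check.
approvals = [date(2024, 1, 3), date(2024, 6, 1), date(2025, 1, 2)]
covers_audit_period(approvals, as_of=date(2025, 1, 15))  # True
```

The same check applies per alignment row: approval evidence, scan findings, change sets, and audit logs each need their own retained series.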
During audit preparation:
- Document the rubric-side coverage — reference this recipe; produce a one-page coverage matrix for the audit (filtered to the categories in scope).
- Address the out-of-rubric concerns — surface the gap items named during scoping as the scope of complementary work: formal risk-assessment deliverables (CC3), vendor-risk register (CC9.2), incident-response runbooks and testing (CC7.3–CC7.5), governance attestations (CC1).
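The one-page coverage matrix can be generated mechanically from the alignment surface. A sketch, with a two-row stand-in for the full table and a hypothetical `coverage_matrix` helper:

```python
# Illustrative generator for the audit-prep coverage matrix: render only
# the alignment rows whose category is in scope, as a markdown table.
ROWS = [
    # (rubric criterion, TSC control, alignment quality, category)
    ("PL4-least-privilege", "CC6.3", "direct", "Security"),
    ("PL2-load-stress-testing", "A1.1", "direct", "Availability"),
]

def coverage_matrix(categories_in_scope):
    """Markdown coverage matrix filtered to categories in scope."""
    lines = ["| Rubric criterion | TSC control | Alignment |",
             "|---|---|---|"]
    for criterion, control, quality, category in ROWS:
        if category in categories_in_scope:
            lines.append(f"| {criterion} | {control} | {quality} |")
    return "\n".join(lines)

print(coverage_matrix({"Security"}))
```

Filtering at generation time keeps the failure mode "presenting out-of-scope rows as SOC 2 work" out of the audit packet by construction.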
Out of scope
Explicitly not covered by this recipe, and therefore not covered by applying the rubric:
- CC1 — Control Environment (personnel, governance, board oversight, commitment to integrity, competence, accountability). Organisational territory; needs HR / executive instruments.
- CC3 — Risk Assessment (formal entity-level risk assessment). Under consideration as a rubric concern (see rubric Open Questions); today requires complementary work.
- CC9.2 — Vendor and business-partner risk. Under consideration as a rubric concern (see rubric Open Questions); today requires a third-party-risk register and vendor-management process.
- CC7.3–CC7.5 — Incident response as a first-class capability (event evaluation, response execution, recovery activities). Under consideration as a rubric concern (see rubric Open Questions); today requires incident-response runbooks, assigned roles, and periodic testing.
- Privacy P3–P8 beyond masking — personal-data lifecycle (collection, retention, disposal, data-subject rights, disclosure records). Under consideration as a rubric concern (see rubric Open Questions); today requires privacy-program work separate from the rubric.
- Certification mechanics — auditor engagement, evidence collection discipline beyond what the rubric’s audit trail produces, Type 1 vs Type 2 assertion preparation. Audit-firm territory.
The rubric gives you the engineering-side half. The other half is organisational / procedural / audit-firm work.
Criteria advanced
This recipe does not advance rubric criteria directly; it is a diagnostic that re-frames application of existing criteria for compliance-aware delivery. The criteria are advanced by their own implementations (via existing recipes like bot-token credential tenancy, GitOps JIT privilege elevation, indexed per-entry registry); this recipe tells you which of those implementations produce SOC 2-aligned evidence and what audit-side work remains.
Prerequisites
None structural. The recipe assumes the rubric itself is being applied; it does not introduce new engineering work beyond the audit-evidence-capture discipline embedded in the operational flow.
Failure modes
- Treating the recipe as a certification path. It is not. It is the engineering-side half of SOC 2 readiness, not the whole. Attempting to pass SOC 2 with only this recipe’s coverage and without the out-of-scope work produces a findings-heavy report.
- Skipping evidence capture during delivery. Rubric level-2 without audit-evidence retention is “controls exist today”; SOC 2 Type 2 requires “controls operated effectively over 6–12 months.” Retention is part of the operational flow, not a retrofit.
- Assuming category scope is universal. If Privacy is not in the client’s commitments, the Privacy-adjacent alignment rows are not load-bearing and shouldn’t be presented as SOC 2 work. Scope matters; apply the recipe to the subset that actually counts.
- Rubric drift. Rubric changes may strengthen or weaken alignment on specific rows. When the rubric bumps, update this recipe’s `rubric_version` and verify each alignment row still holds. Staleness here would misrepresent SOC 2 coverage to clients.
- Treating “adjacent” as “direct.” Adjacent alignments cover overlapping concerns but not identical ones; the “what’s still needed for audit” column is longer for adjacent rows and must be planned for.
Open design questions
About this recipe itself
- Evidence-capture patterns worth extracting as sub-recipes. Log-retention aligned to audit period; approval-evidence preservation; change-set archival. Candidates for future recipes if any of these prove non-trivial in practice.
- How does this recipe compose with the (future) ISO 27001, NIST AI RMF, ISO/IEC 42001 alignment recipes? Sibling recipes with the same shape? Or a meta-recipe that composes across frameworks? Likely sibling until cross-framework patterns emerge.
- Level-by-level readiness mapping. This recipe maps rubric-level-2 to TSC controls. Should there be a level-3 alignment layer too (what compounding-level delivery produces above and beyond audit-readiness)? Probably yes eventually; deferred for first deployment.
- Level-by-level readiness cross-walk with an auditor partner. Move from “rubric-level-2 corresponds to TSC control X” (this recipe) to “rubric-level-2 produces evidence sufficient for SOC 2 Type 2 readiness on TSC control X, contingent on A, B, C.” Requires auditor judgement. Candidate engagement when SOC 2 readiness becomes a live commitment for an Apptivity Lab project.
Compliance-adjacent questions surfaced by the research (may or may not be universal)
These concerns surfaced in the SOC 2 coexistence research as TSC territory the rubric does not currently address. They are captured here rather than in the rubric’s Open Questions because their motivation is SOC 2 alignment — and the rubric should not assume stakeholders have SOC 2 as a goal. If any proves genuinely universal for agentic engineering via independent motivation (not via SOC 2 citation), it can be raised as a rubric open question from the rubric’s own perspective at that point.
- Formal risk assessment. SOC 2’s CC3 is four criteria deep on formal risk assessment (objectives with sufficient clarity, risk identification across entity levels, fraud risk, change-driven risk). The rubric assumes risk thinking happens at project / portfolio governance. Whether this should become a rubric criterion, or stay with complementary governance instruments, is a question this recipe does not resolve — projects pursuing SOC 2 need to carry this work regardless.
- Third-party / vendor risk management. SOC 2’s CC9.2 covers vendor requirements, vulnerability evaluation, risk tiering, termination with data return, privacy/confidentiality commitments from vendors. Modern agentic stacks are heavily third-party-dependent (cloud providers, AI API vendors, MCP authors, SaaS integrations), but the rubric has no criterion for this. For this recipe’s purposes: projects pursuing SOC 2 need a vendor-risk program, independent of rubric adoption.
- Incident response as a first-class capability. SOC 2’s CC7.3–CC7.5 covers event evaluation, response execution (containment, remediation, communication, recovery), and recovery-activities development. The rubric scores detection (`PL3-emission-quality`, `PL3-agent-queryability`) and postmortem-driven improvement (`PL2-agent-audit-trail` level-3) but not the response phase. Projects pursuing SOC 2 need incident-response runbooks, assigned roles, and periodic testing — this recipe names the gap; complementary instruments fill it.
- Personal-data lifecycle beyond the engineering-PII boundary. The rubric holds the engineering-PII boundary (PII does not reach engineering surfaces) but does not score the production-side lifecycle of personal data: collection consistency with purpose, retention aligned to commitments, secure disposal, data-subject rights (access, correction, erasure), disclosure records. Mostly absorbed by the engineering-PII boundary if that principle is held in production too; privacy-program work separate from the rubric otherwise.
- Rubric-on-rubric soundness test. SOC 2’s TSC is evaluated against a four-attribute test (Relevance, Objectivity, Measurability, Completeness; SSAE para .26) — auditors apply this to the criteria themselves. The rubric has no equivalent self-check. Candidate future discipline: a quarterly rubric-soundness assessment (candidate meta-metric “Rubric Soundness Index”), applied to the rubric by rubric maintainers. Methodologically useful regardless of whether any given project pursues SOC 2; captured here because the SOC 2 SSAE framing is what surfaced it.
Case studies
None yet. Recipe status is proposed pending first client-engagement use. Promotes to proven after the first deployment, at which point case-study narrative lands here: which rows were invoked, which gaps surfaced, what evidence formats worked, what didn’t.
Related recipes
- Instances of the alignment surface — bot-token credential tenancy realises the `PL4-least-privilege` ↔ CC6.3 row at the integration layer; GitOps JIT privilege elevation realises `PL4-least-privilege` + `PL4-branch-protection` + `PL2-external-pr-review` + `PL2-agent-audit-trail` + `PL4-agent-invokable-rollback` as a composite mechanism that produces SOC 2-aligned evidence across five rows simultaneously; indexed per-entry registry realises the corpus-taxonomy substrate useful for evidence retention.
- Source research — the alignment surface is extracted from `research/soc2-tsc-rubric-mapping.md`. When the research updates, this recipe updates.
- Future sibling recipes — ISO 27001, NIST AI RMF, ISO/IEC 42001 alignment recipes will share this recipe’s shape. Differences in framework structure may surface; the shape is a template, not a strict schema.