
Architect Support Agentic Platform

Tracking

| Field | Value |
| --- | --- |
| Document ID | AGP-001 |
| itemType | PlatformDesign |
| slug | architect-support-agentic-platform |
| Version | 0.5.0 |
| Created | 2026-04-18 |
| Last Reviewed | 2026-04-18 |
| State | Draft |
| retentionPolicy | indefinite |
| Freshness SLA | 180 days |
| Owner | PER-01 Lena Brandt, Chief Architect |
| Approver | PER-11 Anja Petersen, Chair EARB |
| Dependencies | SpecLang-Design.md, Spec-Type-System.md, SpecChat-BTABOK-Implementation-Plan.md, MCP-Server-Integration-Design.md, BTABOK-Out-of-Scope-Models.md, SpecChat-Design-Decisions-Record.md |
| New repo | E:\Archive\GitHub\dlandi\asap (established 2026-04-18). This document is the founding design |
| Working product name | Architect Support Agentic Platform (ASAP) |
| SpecChat relationship | Dependency, not host. SpecChat is a consumed service |

0. Origin and Scope Decision

This document began as a SpecChat design artifact and grew into the design for a distinct application platform. The scope divergence (practitioner and practice as unit of concern; deployed service with stores, webhooks, and schedulers; audience beyond spec authors; data governance and privacy gates) exceeded what SpecChat-the-spec-language should carry.

Decision (2026-04-18): this design becomes the basis for a new repository with its own application architecture. Working name: Architect Support Agentic Platform (ASAP). SpecChat is a dependency, not a host. ASAP consumes SpecChat’s profile, validator, and canvas-rendering services through the MCP server; SpecChat is unaware of ASAP.

The fork point for shippable deliverables is at the end of Wave 4. Waves 1 through 4 remain SpecChat proper (profile, enforcement, governance mechanics, rendering). Waves 5 and 6 are ASAP deliverables. This boundary is marked in Section 9.

Subsequent revisions of this document will update cross-references, move ASAP-specific open questions to a separate section, and harden the SpecChat dependency contract.

1. Purpose

This document defines a complete agentic support platform for an architecture practitioner working within the BTABoK.

The platform serves the practitioner across all four BTABoK models: Engagement, Value, People, and Competency. Engagement Model support is backed by SpecChat’s in-scope BTABoK profile and its validators. Value, People, and Competency Model support is delivered agentically through the advisory, knowledge, and automation layers of ASAP, without extending the SpecLang profile boundary.

The scope discipline of BTABOK-Out-of-Scope-Models.md is preserved. The three out-of-profile models are out of scope for the SpecLang profile. They are in scope for ASAP. The distinction is load-bearing. Profile extension adds validators, concept types, and schema enforcement to a spec collection. Platform support adds agentic guidance, retrieval, orchestration, and automation around a practitioner’s work without modifying the spec language.

This v0.5 revision is grounded in a direct read of the IASA BTABoK corpus and formalizes the separation from SpecChat. BTABoK’s own framing shapes the content of Sections 3, 7, and 8.

2. Framing: Three Axes

A complete agentic platform is best understood as a cube across three axes.

Axis A. The practitioner lifecycle. Eight stages:

  1. Discovery (interviews, concern mining, domain scan, Jobs-to-be-Done, empathy mapping)
  2. Authoring (concept-by-concept spec creation)
  3. Governance (decisions, waivers, review bodies)
  4. Validation (profile validators, severity policy, migration)
  5. Rendering (canvases, diagrams, stakeholder views)
  6. Operation (runtime drift, scorecard refresh, metric reconciliation, Rapid Value Management)
  7. Audit (retention, trail reconstruction, evidence packets)
  8. Learning (onboarding, terminology, career progression, mentoring)

Axis B. The agentic primitive menu. Ten primitives:

  1. MCP tools (deterministic computation)
  2. Hooks (harness-enforced behaviors)
  3. Scheduled tasks (cron, interval, calendar)
  4. Remote triggers (PR, webhook, CI)
  5. Subagents (specialized, context-isolated workers)
  6. Slash commands (human entry points)
  7. Skills (advisory authoring aids)
  8. Preview and rendering (visual output)
  9. Knowledge retrieval (RAG over BTABoK and adjacent corpora)
  10. Worktrees (isolated sandboxes)

Axis C. BTABoK model coverage. Four models, each with a distinct agentic signature:

  1. Engagement Model. Spec-authoring heavy. Profile-backed. Enforcement available. Organizes the six-stage Architecture Development Life Cycle (ADLC).
  2. Value Model. Planning, portfolio, and outcome heavy. Interwoven with Engagement in BTABoK’s own framing (“the engagement model is the how, the value model is the why”). Advisory only.
  3. People Model. Team, role, community, and managed-career-path heavy. Integration-heavy (directory, HR). Privacy-bounded. Advisory only.
  4. Competency Model. Taxonomy of nine pillars and five proficiency levels. Knowledge-retrieval heavy. Advisory only.

Every platform feature sits in a cell of this cube. Section 6 lays out the lifecycle-by-primitive matrix. Section 7 walks through each model’s agentic surface using BTABoK’s own vocabulary.

3. Authority Gradient

Agentic features vary in the force they apply to a practitioner. The platform makes the gradient legible.

| Tier | Force | Primitives | Example |
| --- | --- | --- | --- |
| Enforce | Blocks the action | Hooks, MCP validators in strict mode | Commit refused on Error severity diagnostic |
| Gate | Requires acknowledgment | Slash commands, PR status checks | Reviewer must approve the waiver chain before merge |
| Advise | Offers and argues | Skills, subagents | Value Model skill proposes benefits-realization fields |
| Inform | Provides facts | MCP tools, RAG retrieval, preview | Canvas renderer returns the projected view |

A feature should choose its tier deliberately. Enforcement earns trust when it is narrow and correct. Advice retains the practitioner’s agency.

BTABoK itself is explicit about governance as a spectrum rather than a uniform enforcement layer: “Architects should be governed, not doing the governing” and “governance should enable agility, not become a police state.” The platform’s enforcement tier must honor this. Strict validation is appropriate for the spec language and the BTABoK profile rules that a collection has opted into. It is not appropriate as a general posture for BTABoK practice guidance.

Model coverage interacts with the gradient. Enforce-tier features are available only for the Engagement Model because only the Engagement Model has a backing profile with validators. Value, People, and Competency features operate at the Advise and Inform tiers only. This is deliberate and permanent. A platform that enforces Value Model judgment on a collection has quietly become a Value Model profile, which v0.1 scope discipline forbids.
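The interaction between model coverage and the gradient can be made mechanical. A minimal sketch, assuming hypothetical names (`Tier`, `allowed_tiers`) that are illustrations, not platform API:

```python
from enum import Enum

class Tier(Enum):
    ENFORCE = "enforce"  # blocks the action
    GATE = "gate"        # requires acknowledgment
    ADVISE = "advise"    # offers and argues
    INFORM = "inform"    # provides facts

# Only the Engagement Model has a backing profile with validators (Section 3).
ENFORCEABLE_MODELS = {"engagement"}

def allowed_tiers(model: str) -> set[Tier]:
    """Authority tiers a feature may declare for a given BTABoK model."""
    if model in ENFORCEABLE_MODELS:
        return {Tier.ENFORCE, Tier.GATE, Tier.ADVISE, Tier.INFORM}
    # Value, People, Competency: Advise and Inform only, by design.
    return {Tier.ADVISE, Tier.INFORM}
```

The lookup encodes the "deliberate and permanent" restriction as data rather than convention, so a feature declaring Enforce for a Value Model surface fails fast.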

4. Platform Components

The platform is ten layered components. Each has a bounded responsibility and a defined interface.

4.1 Deterministic core (MCP tools)

The factual backbone. Never summarized by a model.

Existing: validate_collection, project_canvas, migrate_to_btabok, get_supported_versions, get_deprecation_schedule.

Proposed, Engagement-aligned: resolve_weakref, check_freshness_sla, expand_waiver_chain, diff_transition_architecture, simulate_severity_policy, assemble_earb_packet, render_roadmap, detect_drift, export_audit_trail.

Proposed, Value-aligned: compute_benefits_dependency, score_tech_debt_portfolio (using BTABoK’s Technical Debt Ratio: (Remediation + Maintenance) / Development × 100), evaluate_investment_tradeoff, refresh_scorecard, run_rvm_check (Rapid Value Management early or leading indicator check).

Proposed, People-aligned: compute_role_coverage, list_governance_body_members, summarize_team_topology, check_mentoring_coverage, compute_rotation_status.

Proposed, Competency-aligned: list_competency_areas (across the nine pillars and 80-plus areas), assess_team_coverage, map_cita_path (across the four CITA certifications: Foundation, Associate, Professional, Distinguished), compute_proficiency_gap (against the five proficiency levels).

Authority tier: Inform as queries, Enforce when wired through hooks.
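The Technical Debt Ratio behind the proposed score_tech_debt_portfolio tool is simple enough to pin down exactly. A sketch; the function name and signature are illustrative, not the MCP tool contract:

```python
def technical_debt_ratio(remediation: float, maintenance: float, development: float) -> float:
    """BTABoK Technical Debt Ratio: (Remediation + Maintenance) / Development * 100."""
    if development <= 0:
        raise ValueError("development effort must be positive")
    return (remediation + maintenance) / development * 100

# e.g. 20 units of remediation and 30 of maintenance against 200 of development -> 25.0
```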

4.2 Enforcement layer (hooks)

Hooks run outside the model and cannot be argued with. Engagement Model only, because only Engagement has validators.

Candidates:

Authority tier: Enforce.

4.3 Automation layer (scheduled tasks)

BTABoK is calendrical. Scheduled tasks own the calendar across all four models.

Engagement-aligned:

Value-aligned (the Rapid Value Management cadence is explicit in BTABoK):

People-aligned:

Competency-aligned:

Authority tier: Inform, with Gate escalation for notifications that require acknowledgment.

4.4 Integration layer (remote triggers and CI)

Spec change rarely happens in isolation.

Candidates:

Authority tier: Gate for the PR status check, Inform for the others.

4.5 Specialist workers (subagents)

Subagents carry their own system prompt and context. They are the right home for multi-step work.

Engagement-aligned:

Value-aligned:

People-aligned:

Competency-aligned:

Authority tier: Advise. None of these has commit or merge authority on governance artifacts.

4.6 Human entry points (slash commands)

Existing: /spec-btabok.

Engagement-aligned: /spec-new <type>, /spec-review, /spec-waiver, /spec-canvas <name>, /spec-freshness, /spec-earb-prep, /spec-migrate, /spec-adlc <stage>.

Value-aligned: /spec-value-draft, /spec-scorecard, /spec-techdebt, /spec-nabc (business case drafter), /spec-rvm (Rapid Value Management setup).

People-aligned: /spec-team-topology, /spec-role-coverage, /spec-mentor-match, /spec-culture-diagnostic.

Competency-aligned: /spec-competency-assess, /spec-cita-plan, /spec-team-capability, /spec-skills-gap (Architect Skills Gap Analysis).

Authority tier: Gate.

4.7 Advisory layer (skills)

Engagement-aligned:

Value-aligned:

People-aligned:

Competency-aligned:

Topic-area lenses (cross-cutting, brief per BTABoK’s own treatment): spec-topic-ai, spec-topic-cloud, spec-topic-security, spec-topic-sustainability, spec-topic-devops, spec-topic-integration, spec-topic-agile. These are context-aware lenses, not full frameworks. The skill wording must match BTABoK’s own positioning (“starting points for awareness”).

Authority tier: Advise.

4.8 Rendering and communication (preview)

Engagement-aligned:

Value-aligned:

People-aligned:

Competency-aligned:

Authority tier: Inform.

4.9 Knowledge layer (RAG)

The highest-leverage investment for out-of-profile model support. Separate indexes per domain, all queryable from skills, subagents, and slash commands.

Retrieval replaces “encode into the profile” with “retrieve at authoring time” for any content that does not belong in the spec language.

Authority tier: Inform.

4.10 Shared infrastructure

Authority tier: Infrastructure, no direct user contact.

5. Cross-Cutting Architecture Concerns

5.1 Profile awareness

Every layer reads the active profile from ProfileContext at invocation time. Enforcement is Engagement Model only. Advisory features read ModelContext to tailor guidance to the model the practitioner is currently engaged with.

5.2 Scope boundary preservation

The platform does not widen the SpecLang profile boundary. Value, People, and Competency support is delivered through RAG, advisory skills, subagents, and scheduled tasks. No validator, concept type, or schema rule for these models enters a collection’s active profile.

When an out-of-profile interaction produces an artifact the practitioner wants to keep, the platform writes it to a sibling folder (value-model/, people-model/) and references it with weakRef from the spec collection. Competency data is a deliberate exception: it lives in the practitioner-scoped CompetencyStore, never in a collection.
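The placement rule can be sketched as follows. The sibling folder names come from this section; `sidecar_path`, `weak_ref`, and the weakRef dict shape are illustrative assumptions, not the platform's actual API:

```python
from pathlib import Path

SIDECAR_FOLDERS = {"value": "value-model", "people": "people-model"}

def sidecar_path(collection_root: Path, model: str, artifact_name: str) -> Path:
    """Place an out-of-profile artifact in its sibling folder.

    Competency data is the deliberate exception: it belongs in the
    practitioner-scoped CompetencyStore, never in a collection.
    """
    if model == "competency":
        raise ValueError("competency data lives in CompetencyStore, not the collection")
    return collection_root / SIDECAR_FOLDERS[model] / artifact_name

def weak_ref(target: Path, collection_root: Path) -> dict:
    # Illustrative weakRef shape: a relative path from the collection root.
    return {"weakRef": target.relative_to(collection_root).as_posix()}
```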

5.3 Authority legibility

The practitioner must always be able to tell which tier a feature is operating in. Model coverage is also visible: a Value Model skill identifies itself as Value Model and as advisory.

5.4 Retention and audit trail

The agentic trail extends BTABoK retention policy. The RetentionSink is the single destination. Trails for People and Competency features pass through the PrivacyGate.

5.5 Governance judgment boundary

Mechanics can be automated. Judgment cannot. The platform assembles review packets, surfaces risks, drafts candidate entries. It does not approve waivers, close decisions, sign off on ASRs, award certifications, or adjust compensation bands.

BTABoK is especially clear on this at the Engagement and People boundaries. The governance reviewer subagent is a mentor, not a police officer. The mentoring matchmaker proposes pairings; it does not assign them. The team capability-gap analyzer produces aggregate views; it does not recommend HR actions. These constraints are not optional.

5.6 Privacy boundary

People and Competency data is sensitive. The PrivacyGate mediates access. Defaults:

5.7 Grounding and source authority

Content that contradicts IASA BTABoK authoritative material is a defect. Skills and subagents that cite practice guidance must ground in the RAG index and cite source. Each model index carries a freshness SLA of 180 days.

The canvas index is a special case: BTABoK canvases explicitly cross-reference competencies. The RAG layer preserves those cross-references so a canvas query can surface the relevant pillars, and a competency query can surface the canvases that exercise it.
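The 180-day SLA check itself is deterministic. A minimal sketch; the names are illustrative, not the proposed check_freshness_sla tool contract:

```python
from datetime import date, timedelta

FRESHNESS_SLA = timedelta(days=180)  # per Section 5.7

def is_stale(last_reviewed: date, today: date, sla: timedelta = FRESHNESS_SLA) -> bool:
    """True when an index (or any item with a reviewed date) has outlived its SLA."""
    return today - last_reviewed > sla
```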

5.8 Failure modes and guardrails

6. The Lifecycle-by-Primitive Matrix

A marked cell is a strong fit; an unmarked cell is not a prohibition.

| Stage / Primitive | MCP | Hook | Sched | Remote | Subagent | Slash | Skill | Preview | RAG | Worktree |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Discovery | | | | | X | X | X | | X | |
| Authoring | X | X | | | X | X | X | X | X | |
| Governance | X | X | X | X | X | X | | | X | |
| Validation | X | X | X | X | | X | | | | X |
| Rendering | X | | | | | X | | X | | |
| Operation | X | | X | X | X | | | X | | |
| Audit | X | X | X | | | X | | | | |
| Learning | | | | | | X | X | | X | |

Worktrees are reserved for Migration Pilot and bulk refactor work in the Validation stage.

7. Agentic Support by BTABoK Model

Each model is served by a distinct mix of the ten primitives. This section uses BTABoK’s own vocabulary wherever possible.

7.1 Engagement Model

Status: in scope for the BTABoK profile. Fully supported across all four authority tiers.

BTABoK framing. The Engagement Model is the operating framework that describes how architecture practices execute work across the full lifecycle. Its core workflow is the Architecture Development Life Cycle (ADLC) with six iterative stages: Innovate, Strategy, Plan, Transform, Utilize and Measure, Decommission. The practice of architecture in BTABoK’s own words is outcomes-focused (“the outcome of the work is more important than the method of achieving that outcome”) and customer-obsessed (“Stop Doing Architecture, Start Digitally Enhancing Your Customer”).

Characteristic work. ADLC stage entry and exit; architecturally significant decisions (ASDs) with traceability; ASR cards; waiver chains; governance bodies operating as a spectrum from lightweight mentoring to rigorous Tier 1 review; principles as 8-12 memorable guardrails; viewpoint cards; transition architectures; roadmaps.

Primary primitives. MCP tools, hooks, subagents (governance reviewer, migration pilot, decision scribe with Decision Bias Calibrator, ADLC stage coach), slash commands, preview (canvases, C4, roadmap, ADLC view), skills (spec-btabok, spec-adlc, spec-jtbd).

Enforcement available. Yes. The BTABoK profile’s 13 validators and the core 10 validators run through hooks.

Named BTABoK workflows the platform automates:

Delivery dependency. SpecChat-BTABOK-Implementation-Plan.md Phases 1 through 4 underwrite the profile the enforcement layer depends on.

7.2 Value Model

Status: out of profile, in platform. Advisory and Inform tiers only.

BTABoK framing. Value Model is interwoven with Engagement. The platform reflects this: Value features often run adjacent to Engagement features rather than in separate sessions. BTABoK puts it plainly: “the engagement model is the how, the value model is the why.”

Characteristic work. Objectives as SMART key results; investment planning with demand shaping; tech-debt portfolios measured by Technical Debt Ratio ((Remediation + Maintenance) / Development × 100, managed as a healthy payback schedule); value streams; Rapid Value Management with three tiered indicators (early validation 30-90 days, leading 91 days to one year, lagging at end-state); principles as value-driven guardrails; risk methods; structural complexity analysis.

Primary primitives. Skills (spec-value, spec-benefits-realization, spec-value-stream, spec-investment, spec-rvm, spec-principles), subagents (benefits-realization modeler, value-stream mapper, tech-debt portfolio analyst, investment prioritizer, business case drafter, RVM coach), scheduled tasks (monthly RVM early check, quarterly leading review, annual lagging review, weekly scorecard, quarterly tech-debt portfolio, annual investment and OKR prompt, annual principles review), preview (benefits dependency network, value-stream map, scorecard dashboard, tech-debt quadrant, RVM timeline), RAG Value index, MCP tools (compute_benefits_dependency, score_tech_debt_portfolio, evaluate_investment_tradeoff, refresh_scorecard, run_rvm_check).
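The three RVM indicator tiers reduce to a pure classification, which is what run_rvm_check needs before it can evaluate anything. A sketch; the function name is illustrative, and treating lagging as anything beyond one year is a simplifying assumption (BTABoK places lagging indicators at end-state):

```python
def rvm_tier(days_since_start: int) -> str:
    """Classify an RVM indicator window by age.

    BTABoK tiering: early validation 30-90 days, leading 91 days to
    one year, lagging at end-state (approximated here as beyond a year).
    """
    if days_since_start < 30:
        return "too-early"
    if days_since_start <= 90:
        return "early-validation"
    if days_since_start <= 365:
        return "leading"
    return "lagging"
```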

Named BTABoK workflows the platform automates:

Partial overlap with the profile. MetricDefinition and ScorecardDefinition exist in the Engagement profile. Value Model scheduled tasks read these to drive scorecard refresh without profile change. This is the cleanest pattern for platform-profile collaboration.

Output placement. Sibling value-model/ folder, referenced by weakRef from the spec collection.

7.3 People Model

Status: out of profile, in platform. Advisory and Inform tiers only. Privacy-bounded.

BTABoK framing. The People Model defines the structure, reporting, and administration of the group of internal architects, extended team members, and external influences shaping an architecture practice. It treats architecture as a profession requiring external competency validation, not merely employer-defined roles.

Characteristic work. Organization (federated, centralized, value-stream); BIISS specializations (Business, Information, Infrastructure, Software, plus Solution as generalist); managed career path across six levels (Aspiring, Foundation, Associate, Professional, Distinguished, Chief); extended team (non-titled practitioners doing architecture work); mentoring as required for Professional-plus progression; community of practice; culture (Westrum Culture Diagnostic, Culture Map); role definitions; mindset.

Primary primitives. Skills (spec-people, spec-team-topologies, spec-role-definition, spec-career-path, spec-mentoring, spec-extended-team), subagents (team topology analyst, role coverage mapper, community health observer, mentoring matchmaker, culture diagnostic runner), directory and HR webhooks, scheduled tasks (team topology review, succession scan, governance rotation, architect rotation tracking, Engagement Model Steering Committee prompts), preview (team topology map, role coverage heatmap, governance composition, career progression ladder, mentoring network), RAG People index, MCP tools (compute_role_coverage, list_governance_body_members, summarize_team_topology, check_mentoring_coverage, compute_rotation_status).

Named BTABoK workflows the platform automates:

Partial overlap with the profile. Core SpecItem metadata uses PersonRef for authors, reviewers, committer. The People Model platform layer resolves these into organizational context (team, BIISS specialization, career level, reporting chain) without embedding that context in the spec.

Privacy. Primary concern throughout. PrivacyGate mediates every feature. BTABoK itself signals relational sensitivity: reporting structure changes, mentoring relationships, community conflict, and extended-team credibility are all flagged as requiring human judgment before acting. The platform respects this.

Output placement. Sibling people-model/ folder, referenced by weakRef where useful.

7.4 Competency Model

Status: out of profile, in platform. Advisory and Inform tiers only. Strongly privacy-bounded.

BTABoK framing. The Competency Model is the professional development substrate. Platform support for this model is practitioner development, not spec authoring. The separation is intentional and permanent.

Structure:

Primary primitives. Skills (spec-competency, spec-cita-prep, spec-learning-plan), subagents (competency self-assessment coach, certification path planner, team capability-gap analyzer, Architect Skills Gap Analysis runner, peer-assessment orchestrator), RAG Competency index, scheduled tasks (quarterly capability scan, CITA maintenance prompts, certification expiry, mentoring checkpoints, annual learning-plan refresh), preview (competency radar across the nine pillars, team capability heatmap, CITA timeline, proficiency matrix), MCP tools (list_competency_areas, assess_team_coverage, map_cita_path, compute_proficiency_gap), HRIS integration webhook.
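A sketch of the gap computation behind the proposed compute_proficiency_gap tool, against the five-level proficiency scale. The area names in the test and the dict-based shape are illustrative, not the real taxonomy's data model:

```python
PROFICIENCY_LEVELS = 5  # BTABoK proficiency scale

def proficiency_gap(current: dict[str, int], target: dict[str, int]) -> dict[str, int]:
    """Per-area gap between target and current proficiency (positive gaps only).

    Area names are caller-supplied; the real taxonomy spans nine pillars
    and more than eighty competency areas.
    """
    gaps: dict[str, int] = {}
    for area, want in target.items():
        have = current.get(area, 0)  # unassessed areas count as 0
        if not (0 <= have <= PROFICIENCY_LEVELS and 1 <= want <= PROFICIENCY_LEVELS):
            raise ValueError(f"proficiency out of range for {area}")
        if want > have:
            gaps[area] = want - have
    return gaps
```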

Named BTABoK workflows the platform automates:

No overlap with the profile. By design. The platform layer never puts competency data into a spec.

Privacy. Maximum. Self-assessment data is practitioner-owned and stored in CompetencyStore, not in collections. Team aggregations enforce the small-team threshold. Certification data may be surfaced with authorization. No subagent recommends HR actions (compensation, promotion, termination, hiring). The platform supports preparation and path planning; it does not grant certifications.

Output placement. Practitioner-scoped CompetencyStore, distinct from collections.

7.5 Cross-Model Topic Areas

BTABoK topic areas (AI, Cloud, Security, Integration, DevOps, Sustainability, Systems, Agile) are awareness-level contextual guides, not full frameworks. BTABoK itself describes them as “starting points where topic areas provide context for IT architects to be aware of.” The platform honors this framing.

Implementation: topic-area skills (spec-topic-<area>) invocable in any model context. Each reads ModelContext and tailors guidance to the model currently engaged. The skills are brief. They surface relevant canvases and competencies and point to deeper IASA references. They do not pretend to be authoritative on cloud, security, or AI.

8. Human-in-the-Loop Role Architecture

Sections 4 and 7 specify what each platform component does. This section specifies what each component is not allowed to do, keyed to the human roles BTABoK names in the three out-of-profile models. It makes the authority gradient of Section 3 concrete at the role level rather than the feature level.

8.1 The AI-Suitability Gradient

Roles across the Value, People, and Competency models sort into four bands:

  1. AI-strong. Drafting artifacts, running cadences, aggregating data, coaching walkthroughs, surfacing evidence. AI carries most of the work. A named human confirms.
  2. AI-backstage. Preparing material for a human role (mentor packets, manager calibration aids, hiring interview scorecards). AI never surfaces in the primary relationship. It prepares the human.
  3. AI-proposer. Matchmaking, rotation, ranking against rubrics. AI produces proposals. Humans accept, modify, or reject each one.
  4. AI-excluded. Performance rating, mentor judgment, certification, compensation, hiring, firing, funding allocation. AI has no authority here and must not produce output that reads as a recommendation.

Every subagent, MCP tool, scheduled task, and skill is assigned to exactly one band. The assignment is a property of the feature’s design, not a runtime decision.
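Band assignment as a design-time property rather than a runtime decision can be sketched as a static lookup. The feature names and the Band enum below are illustrative:

```python
from enum import Enum

class Band(Enum):
    AI_STRONG = "ai-strong"
    AI_BACKSTAGE = "ai-backstage"
    AI_PROPOSER = "ai-proposer"
    AI_EXCLUDED = "ai-excluded"

# Assignment is fixed when the feature is designed, never decided at runtime.
FEATURE_BANDS: dict[str, Band] = {
    "decision-scribe": Band.AI_STRONG,
    "mentoring-matchmaker": Band.AI_PROPOSER,
    "hiring-scorecard-prep": Band.AI_BACKSTAGE,
}

def band_of(feature: str) -> Band:
    # A missing entry is a design defect, so a KeyError is the right failure.
    return FEATURE_BANDS[feature]
```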

8.2 The HITL Pattern Language

Seven safeguard patterns recur across role-level analysis. The delivery plan references these by name.

| Pattern | Description | Example |
| --- | --- | --- |
| Draft-and-review | AI drafts; named human owns the artifact | Decision scribe produces a DecisionRecord; architect signs |
| Assemble-and-present | AI gathers inputs; human interprets and acts | Governance reviewer assembles the EARB packet; chair decides |
| Propose-and-confirm | AI suggests an action; human confirms each one | Mentoring matchmaker proposes pairings; mentor and mentee confirm |
| Monitor-and-alert | AI watches signals; human decides response | RVM coach flags deviation; benefit owner decides continue, restructure, or decommit |
| Coach-and-capture | AI guides a practitioner-owned process; data stays with the practitioner | Self-assessment coach walks the pillars; CompetencyStore owned by the practitioner |
| Shadow-and-record | AI observes; human acts; AI builds the audit trail | Culture diagnostic facilitator runs the exercise; AI aggregates and records outcomes |
| Gate-and-block | AI cannot proceed past a checkpoint without an explicit human signal | Hiring scorecard prep cannot export candidate rankings; manager must author the decision |

8.3 Value Model Roles

| Role | Band | Primary pattern | HITL safeguard |
| --- | --- | --- | --- |
| Executive sponsor | AI-strong | Draft-and-review | Sponsor owns OKR wording; quarterly review required |
| Strategic planner | AI-strong | Draft-and-review | Planner selects; no option auto-adopted |
| Investment prioritizer (portfolio committee) | AI-proposer | Propose-and-confirm | Committee decides; AI ranking is input, not verdict |
| Business case author (architect) | AI-strong | Draft-and-review | Architect signs; sponsor countersigns |
| PMO analyst | AI-strong | Assemble-and-present | PMO runs the cycle; AI output is agenda material |
| Benefit owner | AI-strong | Monitor-and-alert | Owner decides continue, restructure, or decommit |
| Value stream owner | AI-strong | Monitor-and-alert | Owner decides interventions |
| Capability owner | AI-strong | Draft-and-review | Owner funds and sequences |
| Tech-debt steward | AI-strong | Assemble-and-present | Steward approves; portfolio committee funds |
| RVM monitor | AI-strong | Monitor-and-alert | Monitor and benefit owner decide the response |
| Principle author (practice) | AI-strong | Draft-and-review | Workshop group adopts |
| Risk owner | AI-strong | Draft-and-review | Owner decides accept, transfer, or mitigate |
| Scorecard owner | AI-strong | Draft-and-review | Owner signs off on scorecard revisions |
| Objectives cascader | AI-strong | Draft-and-review | Team leads commit |

Stop-line. AI never allocates funding, approves a business case, or signs OKRs on behalf of a human.

8.4 People Model Roles

| Role | Band | Primary pattern | HITL safeguard |
| --- | --- | --- | --- |
| Chief Architect | AI-strong | Draft-and-review | Communications reviewed before sending |
| Distinguished Architect | AI-strong | Draft-and-review | Architect owns every output |
| Professional Architect | AI-strong | Draft-and-review | Architect retains decision rights |
| Associate Architect | AI-strong | Coach-and-capture | Mentor approves progression |
| Foundation Architect | AI-strong | Coach-and-capture | Mentor oversees progression |
| Aspiring Architect | AI-strong | Coach-and-capture | Individual-led; no organizational decisions |
| BIISS specialist (any of five) | AI-strong | Draft-and-review | Practitioner owns outputs |
| Mentor | AI-backstage | Assemble-and-present | Mentor fully owns evaluation; AI never judges work products |
| Mentee | AI-strong | Coach-and-capture | Relationship with mentor remains primary |
| Community of Practice lead | AI-strong | Draft-and-review | Lead decides direction and cadence |
| Engagement Model Steering Committee member | AI-proposer | Assemble-and-present | Committee decides |
| Line manager | AI-backstage | Assemble-and-present | Manager owns every people decision |
| Extended Team member | AI-strong | Coach-and-capture | Individual chooses engagement |
| External architect (vendor or SI) | AI-strong | Draft-and-review | Engagement manager owns the relationship |
| HR representative | AI-backstage | Assemble-and-present | HR owns every decision |
| Hiring manager | AI-backstage | Gate-and-block | Manager decides; AI never ranks candidates |
| Culture assessment facilitator | AI-strong | Shadow-and-record | Facilitator interprets; team leaders act |
| Rotation coordinator | AI-proposer | Propose-and-confirm | Coordinator proposes; manager approves |
| Mentoring matchmaker (practice) | AI-proposer | Propose-and-confirm | Humans confirm every pairing |

Stop-lines. AI never rates performance, judges mentor work products, ranks hiring candidates, or recommends compensation actions.

8.5 Competency Model Roles

| Role | Band | Primary pattern | HITL safeguard |
| --- | --- | --- | --- |
| Self-assessor (the practitioner) | AI-strong | Coach-and-capture | Practitioner sets every rating; data in CompetencyStore |
| Peer assessor | AI-strong | Coach-and-capture | Peer sets ratings; results not auto-shared |
| Mentor-assessor | AI-backstage | Assemble-and-present | Mentor performs the assessment; AI surfaces evidence |
| Certification authority (CITA board) | AI-excluded | | Out of platform scope |
| Certification path planner (self) | AI-strong | Draft-and-review | Practitioner adopts the plan |
| Hiring manager (using competency data) | AI-backstage | Gate-and-block | Manager decides; PrivacyGate authorizes reads |
| HR representative | AI-backstage | Assemble-and-present | HR systems own decisions |
| Learning and Development manager | AI-strong | Assemble-and-present | L&D designs and delivers |
| Team capability reviewer | AI-strong | Assemble-and-present | Small-team suppression; no identifying detail leaks |
| 360-degree assessment orchestrator | AI-proposer | Assemble-and-present | Practitioner owns participation; mentor owns evaluation |
| Competency Model Steering Committee member | AI-proposer | Assemble-and-present | Committee decides |
| CITA maintenance tracker | AI-strong | Shadow-and-record | Practitioner verifies logged hours |

Stop-lines. AI never rates work products, links competency data to compensation, or grants certifications.

8.6 Platform-Wide Constraints

Three BTABoK principles constrain AI authority more strictly than technical capability would:

Conversely, BTABoK invites AI hardest in three areas:

8.7 Mapping to Delivery Waves

Each subagent in the delivery plan carries its band and primary pattern. Waves 5 and 6 reference this section for every shipped subagent. New subagents added in future waves inherit the classification discipline established here.

9. Delivery Plan

Six waves. Each is a coherent increment that could stand on its own.

Product fork. Waves 1 through 4 are SpecChat deliverables (profile, enforcement, governance mechanics, rendering). Waves 5 and 6 are ASAP deliverables and ship from the new repository. The dependency arrow points from ASAP to SpecChat: ASAP consumes SpecChat through the MCP server surface. SpecChat ships independently on its own cadence without waiting for ASAP.

9.1 Wave 1. Enforce and Inform (Engagement)

Prerequisites: MCP-Server-Integration-Design.md Phases 1 through 4 complete.

Deliverables:

Exit criterion: a spec change that violates profile rules is blocked locally and an audit record exists.

9.2 Wave 2. PR flow and governance mechanics (Engagement)

Deliverables:

Exit criterion: a PR touching specs produces a review digest; the EARB chair can retrieve the week’s packet; the bias calibrator is invoked on every decision.

9.3 Wave 3. Calendar and operation (Engagement, begin Value)

Deliverables:

Exit criterion: scheduled freshness, waiver, scorecard, and RVM early-indicator tasks run without human invocation; ADLC stage guidance available on demand.

9.4 Wave 4. Rendering and communication (Engagement). SpecChat GA exit.

Deliverables:

Exit criterion: a stakeholder who cannot read CoDL reads the rendered view and obtains the same information.

9.5 Wave 5. Knowledge and Value Model. ASAP Wave 1.

Deliverables:

Exit criterion: an architect can invoke Value Model guidance, receive IASA-grounded advice, produce sidecar artifacts, and track RVM indicators on a monthly, quarterly, and annual cadence.

9.6 Wave 6. People and Competency Models. ASAP Wave 2.

Deliverables:

Exit criterion: a practitioner can self-assess competency, plan a CITA path, run Architect Skills Gap Analysis, analyze team topology, and track role coverage across the BIISS specializations. All privacy gates active. No People or Competency data leaks into spec collections.

10. Out of Platform Scope

Explicitly not delivered:

11. Evaluation

Engagement:

  1. EARB packet preparation moves from hours to minutes.
  2. Freshness SLA compliance rises above 90 percent without human reminders.
  3. Waiver expiries are never missed.
  4. PR review latency on spec PRs drops.
  5. Decision Bias Calibrator invocation rate approaches 100 percent on logged decisions.
  6. Time from first stakeholder interview to reviewable StakeholderCard plus candidate ASRs drops by a measurable multiple.

Value:

  1. NABC business cases for PMO cycles are drafted faster with platform support.
  2. RVM early validation indicators exist for a majority of active initiatives.
  3. Technical Debt Ratio is tracked continuously rather than on ad-hoc review.
  4. Scorecard misses are surfaced proactively against MetricDefinition targets.

People:

  1. Role coverage gaps for a new collection are identified before kickoff.
  2. Architect rotation status is visible and prompts action on over-tenure.
  3. Governance-body rotations happen without manual tracking.
  4. Mentoring coverage is visible per practitioner, and gaps are surfaced.

Competency:

  1. Practitioners complete self-assessments more frequently than under manual processes.
  2. CITA progression is tracked per practitioner with planning output.
  3. Team capability data is never leaked in identifying form.
  4. Peer and mentor assessment cycles are orchestrated without losing artifacts.

Each criterion has an owning metric and a baseline established in Wave 1.

12. Open Questions

13. Source References

[R1] SpecLang Design. SpecLang-Design.md. Profile architecture and the one-profile-at-a-time decision.

[R2] Spec Type System. Spec-Type-System.md. Three-layer stratification.

[R3] SpecChat BTABOK Implementation Plan. SpecChat-BTABOK-Implementation-Plan.md. Phases and deliverables for the profile itself.

[R4] MCP Server Integration Design. MCP-Server-Integration-Design.md. Existing MCP tools and planned additions.

[R5] BTABOK Out of Scope Models. BTABOK-Out-of-Scope-Models.md. The profile boundary this platform must respect while delivering platform-layer support for all four models.

[R6] SpecChat Design Decisions Record. SpecChat-Design-Decisions-Record.md. Settled decisions including SD-ONEPROF.

[R7] SpecChat BTABOK Acronym and Term Glossary. SpecChat-BTABOK-Acronym-and-Term-Glossary.md. Canonical for SpecChat terminology.

[R7a] ASAP Acronym and Term Glossary. ASAP-Acronym-and-Term-Glossary.md. Canonical for ASAP terminology.

[R8] IASA Global. Business Technology Architecture Body of Knowledge (BTABoK). https://iasa-global.github.io/btabok/. Authoritative source for all four models.

[R9] IASA Global. Engagement Model. Pages including engagement.md, architecture_practice.md, architecture_lifecycle.md, decisions.md, governance_em.md, principles.md, stakeholders.md, roadmap.md.

[R10] IASA Global. Value Model content. Pages including objectives.md, investment_planning.md, technical_debt.md, benefits_realization.md, value_streams.md, value_methods.md, risk_methods.md.

[R11] IASA Global. People Model content. Pages including organization.md, roles.md, career.md, extended_team.md, community.md, competency.md, culture.md, mentoring.md.

[R12] IASA Global. Competency Model. https://iasa-global.github.io/btabok/competency_model_m.html. Nine pillars, 80-plus areas, five proficiency levels, four CITA certifications.

[R13] IASA Global. Structured Canvases. https://iasa-global.github.io/btabok/structured_canvases_m.html. The 75-plus canvas library with competency cross-references.

[R14] IASA Global. Topic Areas. Pages in topics/ including agile.md, ai_ml.md, cloud.md, dev_ops.md, integration.md, security.md, systems.md, plus sustainability/. Brief contextual guides by BTABoK’s own framing.