03 How It Works

One screening logic, used by outreach, framework application, and technical assistance.

Because the same engine drives every employer engagement, results are consistent across staff and partners — and handoffs into TA, occupation review, and SME consultation become cleaner by default.
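One way to picture the shared-engine idea (all names and scoring below are illustrative placeholders, not the program's actual build spec): every entry point calls the same screening function, so outreach, framework application, and technical assistance can never return different results for the same employer inputs.

```python
# Illustrative sketch only: function names and the threshold are hypothetical.

def screen(responses: dict) -> dict:
    """The single screening engine shared by every entry point."""
    score = sum(1 for answer in responses.values() if answer)
    return {"score": score, "escalate": score >= 3}

# Each workflow is a thin wrapper around the same engine,
# so results stay consistent across staff and partners.
def outreach_screen(responses: dict) -> dict:
    return screen(responses)

def ta_intake_screen(responses: dict) -> dict:
    return screen(responses)

answers = {"uses_ai_tools": True, "custom_models": True, "hiring_ai_roles": True}
assert outreach_screen(answers) == ta_intake_screen(answers)
```

Because the wrappers share one engine, a fix or refresh to the logic propagates to every engagement channel at once.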

04 Build Roadmap

360 days from logic design to public launch: 240 days to a validated release candidate, then launch and integration. Phased so the first release is validated, not rushed.

01
Days 1–60
Design the logic

Confirm intake questions, scoring, occupation categories, escalation points, and referral rules.

Key Outputs
Approved logic model & build spec.
02
Days 61–120
Build the content

Develop the keyword library, output templates, work-process language, and RTI logic.

Key Outputs
Draft content bank, scoring model, screen designs.
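A keyword library feeding a scoring model could be sketched like this (the terms and weights here are placeholders, not the draft content bank):

```python
# Hypothetical keyword library: terms and weights are illustrative only.
KEYWORD_WEIGHTS = {
    "machine learning": 3,
    "model deployment": 3,
    "prompt engineering": 2,
    "data pipeline": 1,
}

def score_job_text(text: str) -> int:
    """Sum the weights of every library keyword found in a job description."""
    text = text.lower()
    return sum(weight for keyword, weight in KEYWORD_WEIGHTS.items()
               if keyword in text)

score_job_text("Seeking engineer for model deployment and data pipeline work")
# matches "model deployment" (3) + "data pipeline" (1) -> 4
```

Keeping the library in one data structure is what makes the refresh cycle described later tractable: updating the dictionary updates every screen that uses it.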
03
Days 121–180
Develop the prototype

Build the interface, scoring model, dashboards, and exportable outputs. SME review.

Key Outputs
Working prototype, SME-reviewed revisions.
04
Days 181–240
Validate the tool

Pilot across broad, AI-enhanced, AI-centric, and infrastructure occupations.

Key Outputs
Validated release candidate.
05
Days 241–360
Launch & integrate

Connect to technical assistance and tracking workflows. First refresh cycle.

Key Outputs
Public launch, analytics, improvement plan.
05 Governance

Built for trust as much as speed.

Every classification is explainable. Employers and seekers see why a result was returned and what evidence supported it. SME review is a routing option, not an afterthought.
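An explainable result might carry its evidence and review routing alongside the classification itself, along these lines (the structure and field names are illustrative assumptions, not the tool's schema):

```python
from dataclasses import dataclass, field

@dataclass
class ScreeningResult:
    """A classification that carries its own justification."""
    category: str                                  # e.g. "AI-enhanced"
    evidence: list = field(default_factory=list)   # matched keywords, intake answers
    sme_review: bool = False                       # SME review as a routing option

result = ScreeningResult(
    category="AI-enhanced",
    evidence=["matched keyword: 'machine learning'", "intake: uses AI tools daily"],
)
```

Returning evidence with every result is what lets employers and seekers see why a classification was made, rather than receiving a bare label.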

Refresh procedures keep the keyword library, occupation rules, and output templates current as the AI workforce shifts. The logic is versioned and auditable.
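Versioning the logic could be as simple as stamping every output with the versions of the library and rules that produced it, so past classifications remain auditable after a refresh. A minimal sketch, assuming date-style version tags (all values hypothetical):

```python
# Hypothetical version metadata; the real refresh cadence is set by governance.
LOGIC_VERSION = {"keyword_library": "2024.2", "occupation_rules": "1.3"}

def classify(text: str) -> dict:
    """Return a classification stamped with the logic versions in force,
    so any past result can be audited against the rules that produced it."""
    category = "AI-enhanced" if "machine learning" in text.lower() else "broad"
    return {"category": category, "logic_version": dict(LOGIC_VERSION)}
```

After a refresh, bumping the version tags is enough to distinguish results produced under the old rules from those produced under the new ones.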

A public-facing tool for employer education, with cleaner handoffs into the work that follows.