How to use TruAI
Complete guide to every module, the data format for import/export, and how to get the most out of the tracker.
Overview
TruAI is a governance dashboard for hospitals adopting AI in clinical settings. It tracks 10 dimensions of safe AI adoption — from tool inventory and governance approval, through staff training and patient consent, to safety incidents, workforce experience, and equity monitoring.
The application is aligned with ECRI's 2026 Top 10 Patient Safety Concerns, which identified AI diagnostics as the #1 concern. Every module maps to ECRI's action recommendations.
Navigation
The left sidebar contains links to all 10 modules, plus links to this Help page and the About page (both open in new windows). The top bar shows the current module name, Import/Export buttons, and the data source badge (Demo or Custom).
The theme toggle (☀️/🌙) in the sidebar footer switches between dark and light mode. Your preference is saved across sessions.
Importing Data
Click the "📁 Import" button in the top bar to upload a JSON file containing your organization's data. The file must contain the required data arrays (see specification below). Once uploaded, the dashboard immediately refreshes with your data and the badge changes from "Demo Data" to "Custom Data."
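Before importing, you can sanity-check a file locally. The sketch below (illustrative, not part of the app) verifies that the six top-level arrays from the specification are present:

```python
import json

REQUIRED_ARRAYS = ["tools", "incidents", "training", "consents", "equity", "workforce"]

def validate_data_file(text: str) -> list[str]:
    """Return a list of problems found; an empty list means the file looks importable."""
    problems = []
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"Not valid JSON: {e}"]
    if not isinstance(data, dict):
        return ["Top level must be a JSON object"]
    for key in REQUIRED_ARRAYS:
        if key not in data:
            problems.append(f"Missing required array: {key}")
        elif not isinstance(data[key], list):
            problems.append(f"'{key}' must be an array")
    return problems
```

This only checks the top-level shape; per-field validation (types, allowed values) follows the tables in the specification below.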
Exporting Data
Click the "📤 Export" button to download the current dataset (demo or custom) as a JSON file. The file is named with today's date. Use this for backups, sharing with colleagues, or audit documentation.
Data File Specification
The data file is a single JSON object with the following top-level structure:
tools — Array of AI tool objects (the AI Registry)
| Field | Type | Description |
|---|---|---|
| id | String | Unique tool identifier (e.g., "AI-001") |
| name | String | Tool display name |
| vendor | String | Vendor or manufacturer |
| dept | String | Department where deployed |
| useCase | String | Description of clinical use case |
| type | String | "Diagnostic" or "Non-Diagnostic" |
| owner | String | Responsible individual |
| stage | String | "Production", "Pilot", or "Evaluation" |
| risk | String | "High", "Medium", or "Low" |
| fda | String | "Cleared", "Exempt", "Pending", or "Not Applicable" |
| transparency | String | "Documented", "Partial", or "Opaque" |
| biasReview | String | "Completed", "Pending", or "Not Started" |
| hfReview | String | "Completed", "Pending", or "Not Started" |
| validation | String | "External", "Internal", or "None" |
| knownFailures | String | Known failure modes or limitations |
| approved | Boolean | Whether governance approval has been granted |
| approvalDate | String | Date of approval (YYYY-MM-DD) or empty |
| policyStatus | String | "Active", "Under Review", or "Not Started" |
| lastReview | String | Date of last governance review (YYYY-MM-DD) |
| trainingPct | Number | Percentage of relevant staff trained (0–100) |
| consentPct | Number | Percentage of affected patients with consent (0–100) |
| incidents | Number | Number of safety incidents reported |
| overrides | Number | Number of clinician overrides/disagreements |
incidents — Array of safety event objects
| Field | Type | Description |
|---|---|---|
| id | String | Unique incident ID (e.g., "INC-001") |
| toolId | String | References tool.id |
| date | String | Date of incident (YYYY-MM-DD) |
| type | String | Category: "False Positive", "Missed Detection", "Automation Bias", "Hallucination", "Bias-Related", "Workflow Disruption", "Alert Fatigue" |
| severity | String | "High", "Medium", or "Low" |
| dept | String | Department where event occurred |
| description | String | Narrative description of the event |
| status | String | "Open", "Under Review", or "Resolved" |
| investigation | String | "Completed", "In Progress", or "Not Started" |
| vendorNotified | Boolean | Whether the AI vendor was notified |
| correctiveAction | String | Description of corrective action taken (or empty) |
training — Array of department training records
| Field | Type | Description |
|---|---|---|
| dept | String | Department name |
| total | Number | Total staff in department |
| completed | Number | Staff who completed AI training |
| critThinking | Number | Staff who completed critical thinking assessment |
| biasEd | Number | Staff who completed bias education |
| errorReporting | Number | Staff trained on AI error reporting |
consents — Array of department consent records
| Field | Type | Description |
|---|---|---|
| dept | String | Department name |
| patients | Number | Total patients affected by AI in this department |
| disclosed | Number | Patients informed that AI is used in their care |
| consented | Number | Patients who gave informed consent |
| optedOut | Number | Patients who opted out of AI-assisted care |
equity — Array of equity performance records (one per tool with demographic data)
| Field | Type | Description |
|---|---|---|
| toolId | String | References tool.id |
| metric | String | What is being measured (e.g., "Sensitivity for PE detection") |
| overall | Number | Overall performance metric |
| white, black, hispanic, asian | Number | Performance by racial/ethnic group |
| elderly, pediatric | Number | Performance by age group |
workforce — Array of department workforce survey results (1–5 scale)
| Field | Type | Description |
|---|---|---|
| dept | String | Department name |
| satisfaction | Number | Staff satisfaction with AI tools (1–5) |
| workflowBurden | Number | Perceived workflow burden from AI (1=low, 5=high burden) |
| usability | Number | Usability rating of AI tools (1–5) |
| deskilling | Number | Concern about clinical skill erosion (1=low, 5=high concern) |
| speakUp | Number | Comfort speaking up about AI issues (1–5) |
Sample Data File
Below is a minimal sample JSON file with 2 tools, 1 incident, and 1 department each for training, consent, equity, and workforce. Copy this as a starting template.
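All names, vendors, dates, and numbers below are illustrative placeholders constructed from the field tables above:

```json
{
  "tools": [
    {
      "id": "AI-001", "name": "ChestRead CX", "vendor": "Example Imaging Inc.",
      "dept": "Radiology", "useCase": "Flags suspected pneumothorax on chest X-rays",
      "type": "Diagnostic", "owner": "Dr. A. Example", "stage": "Production",
      "risk": "High", "fda": "Cleared", "transparency": "Documented",
      "biasReview": "Completed", "hfReview": "Completed", "validation": "External",
      "knownFailures": "Reduced sensitivity on portable films",
      "approved": true, "approvalDate": "2025-03-15", "policyStatus": "Active",
      "lastReview": "2025-11-01", "trainingPct": 92, "consentPct": 88,
      "incidents": 1, "overrides": 14
    },
    {
      "id": "AI-002", "name": "NoteDraft", "vendor": "Example Scribe LLC",
      "dept": "Internal Medicine", "useCase": "Drafts visit notes from ambient audio",
      "type": "Non-Diagnostic", "owner": "J. Example, CMIO", "stage": "Pilot",
      "risk": "Medium", "fda": "Not Applicable", "transparency": "Partial",
      "biasReview": "Pending", "hfReview": "Not Started", "validation": "Internal",
      "knownFailures": "Occasional hallucinated medication names",
      "approved": false, "approvalDate": "", "policyStatus": "Under Review",
      "lastReview": "2025-09-20", "trainingPct": 45, "consentPct": 0,
      "incidents": 0, "overrides": 3
    }
  ],
  "incidents": [
    {
      "id": "INC-001", "toolId": "AI-001", "date": "2025-10-02",
      "type": "Missed Detection", "severity": "High", "dept": "Radiology",
      "description": "Small apical pneumothorax not flagged; caught on radiologist read",
      "status": "Under Review", "investigation": "In Progress",
      "vendorNotified": true, "correctiveAction": ""
    }
  ],
  "training": [
    { "dept": "Radiology", "total": 40, "completed": 37, "critThinking": 30, "biasEd": 28, "errorReporting": 35 }
  ],
  "consents": [
    { "dept": "Radiology", "patients": 1200, "disclosed": 1100, "consented": 1050, "optedOut": 12 }
  ],
  "equity": [
    { "toolId": "AI-001", "metric": "Sensitivity for pneumothorax detection",
      "overall": 91, "white": 93, "black": 88, "hispanic": 90, "asian": 92,
      "elderly": 89, "pediatric": 85 }
  ],
  "workforce": [
    { "dept": "Radiology", "satisfaction": 3.8, "workflowBurden": 2.9, "usability": 4.0, "deskilling": 2.5, "speakUp": 4.2 }
  ]
}
```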
📊 Dashboard
The landing page shows portfolio-wide KPIs: total AI tools (approved vs pending), high-risk tools count, open incidents, training completion percentage, patient disclosure rate, and overdue governance reviews. Alert banners surface when open incidents or overdue reviews exist. The ECRI Alignment Status section shows progress against ECRI's specific 2026 recommendations.
🤖 AI Registry
Central inventory of every AI tool. The summary table shows ID, name, vendor, department, type, risk level, stage, FDA status, and training completion. Below the table, each tool has an expanded detail card showing use case, owner, known failure modes, transparency rating, bias review status, human factors assessment, validation approach, and clinician override count.
🏛️ Governance
Tracks policy status, approval, last review date (flagged when more than 180 days old), risk, and assessment completion for each tool. The ECRI Governance Checklist shows progress bars for 6 key requirements. The Regulatory Tracking panel summarizes FDA clearance status and notes the current absence of federal AI regulation.
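The 180-day review check can be sketched as follows (assuming `lastReview` dates in YYYY-MM-DD per the specification; the app's exact flagging logic may differ):

```python
from datetime import date

def is_review_overdue(last_review: str, today: date, threshold_days: int = 180) -> bool:
    """Flag a tool whose last governance review is more than threshold_days old."""
    last = date.fromisoformat(last_review)  # parses "YYYY-MM-DD"
    return (today - last).days > threshold_days
```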
🎓 Training
Department-by-department tracking of four competency areas: AI system training, critical thinking assessment, bias education, and error reporting training. Each shows a progress bar. ECRI specifically requires all four of these training dimensions.
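Given the training record format in the specification, the per-competency percentages behind each progress bar can be computed as (a sketch; the dashboard's own calculation may differ in rounding):

```python
def completion_rates(dept: dict) -> dict:
    """Percentage complete for each of the four ECRI training dimensions."""
    total = dept["total"]
    return {field: round(100 * dept[field] / total, 1)
            for field in ("completed", "critThinking", "biasEd", "errorReporting")}
```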
🩺 Clinical Use
Shows which tools are diagnostic vs. non-diagnostic, where each is deployed, and the clinician override count. High override rates may indicate poor AI fit; low override rates on diagnostic tools may indicate automation bias (clinicians deferring to AI without independent verification).
📋 Consent
Tracks patient AI disclosure and consent by department. ECRI requires that patients be informed when AI is used in their care and given the option to opt out. The disclosure rate progress bar highlights departments where patients have not been informed.
🚨 Safety Events
Incident log with AI-specific event categories: False Positive, Missed Detection, Automation Bias, Hallucination, Bias-Related, Workflow Disruption, and Alert Fatigue. Each incident tracks severity, investigation status, vendor notification, and corrective action. ECRI notes that AI-related events are often underreported because staff have difficulty attributing them to AI.
👥 Workforce
Survey data by department on a 1–5 scale across five dimensions: satisfaction, workflow burden, usability, deskilling concern, and speak-up culture. Workflow burden and deskilling use inverted color coding (high values = red). ECRI requires monitoring of staff satisfaction and fostering a just culture where staff feel safe reporting AI issues.
⚖️ Equity
Per-tool performance across 7 demographic groups (overall, White, Black, Hispanic, Asian, elderly, pediatric). Bar visualizations show the performance metric for each group, and the disparity gap (max minus min) is calculated automatically. Gaps exceeding 5 points trigger a yellow alert; gaps exceeding 10 points trigger a red alert requiring immediate investigation.
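The disparity-gap logic can be sketched as follows (assuming the gap is taken over the six subgroup fields of an equity record; the function name is illustrative):

```python
def disparity_alert(record: dict) -> tuple[float, str]:
    """Compute the disparity gap (max minus min across demographic subgroups)
    and the alert level it triggers."""
    groups = ["white", "black", "hispanic", "asian", "elderly", "pediatric"]
    values = [record[g] for g in groups]
    gap = max(values) - min(values)
    if gap > 10:
        return gap, "red"     # requires immediate investigation
    if gap > 5:
        return gap, "yellow"
    return gap, "none"
```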
📄 Reports
Executive summary metrics and an ECRI 2026 Compliance Scorecard with progress bars across 8 dimensions. Report templates are listed for: Executive Summary, ECRI Alignment Report, Incident Report, Training Report, Equity Report, and Audit Documentation.
Need help?
Questions about setup, data formatting, or deployment?
alain@heailtcaire.ai