
How to use TruAI

Complete guide to every module, the data format for import/export, and how to get the most out of the tracker.

On this page
Overview · Navigation · Importing Data · Exporting Data · Data File Specification · Sample Data File
📊 Dashboard · 🤖 AI Registry · 🏛️ Governance · 🎓 Training · 🩺 Clinical Use · 📋 Consent · 🚨 Safety Events · 👥 Workforce · ⚖️ Equity · 📄 Reports
FAQ

Overview

TruAI is a governance dashboard for hospitals adopting AI in clinical settings. It tracks 10 dimensions of safe AI adoption — from tool inventory and governance approval, through staff training and patient consent, to safety incidents, workforce experience, and equity monitoring.

The application is aligned with ECRI's 2026 Top 10 Patient Safety Concerns, which identified AI diagnostics as the #1 concern. Every module maps to ECRI's action recommendations.

Navigation

The left sidebar contains links to all 10 modules, plus links to this Help page and the About page (both open in new windows). The top bar shows the current module name, Import/Export buttons, and the data source badge (Demo or Custom).

The theme toggle (☀️/🌙) in the sidebar footer switches between dark and light mode. Your preference is saved across sessions.

Importing Data

Click the "📁 Import" button in the top bar to upload a JSON file containing your organization's data. The file must contain the required data arrays (see specification below). Once uploaded, the dashboard immediately refreshes with your data and the badge changes from "Demo Data" to "Custom Data."

💡 Import replaces the entire dataset. Export your current data first if you want to preserve it.
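Before uploading, you can sanity-check a file's structure yourself. Below is a minimal sketch in Python; the key names follow the specification later on this page, but the script itself is illustrative and not part of TruAI:

```python
import json

# Arrays the importer requires; "equity" and "workforce" are optional.
REQUIRED = ["tools", "incidents", "training", "consents"]

def check_import_file(path):
    """Return a list of problems found in a TruAI data file (empty list = OK)."""
    with open(path) as f:
        data = json.load(f)
    problems = []
    for key in REQUIRED:
        if key not in data:
            problems.append(f"missing required array: {key}")
        elif not isinstance(data[key], list):
            problems.append(f"{key} must be an array")
    if "org" not in data:
        problems.append("missing 'org' (organization name)")
    return problems
```

If the function returns an empty list, the file has the required top-level shape; it does not validate individual record fields.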

Exporting Data

Click the "📤 Export" button to download the current dataset (demo or custom) as a JSON file. The file is named with today's date. Use this for backups, sharing with colleagues, or audit documentation.

Data File Specification

The data file is a single JSON object with the following top-level structure:

{
  "org": "Your Hospital Name",
  "tools": [...],
  "incidents": [...],
  "training": [...],
  "consents": [...],
  "equity": [...],
  "workforce": [...]
}

tools — Array of AI tool objects (the AI Registry)

Field | Type | Description
id | String | Unique tool identifier (e.g., "AI-001")
name | String | Tool display name
vendor | String | Vendor or manufacturer
dept | String | Department where deployed
useCase | String | Description of clinical use case
type | String | "Diagnostic" or "Non-Diagnostic"
owner | String | Responsible individual
stage | String | "Production", "Pilot", or "Evaluation"
risk | String | "High", "Medium", or "Low"
fda | String | "Cleared", "Exempt", "Pending", or "Not Applicable"
transparency | String | "Documented", "Partial", or "Opaque"
biasReview | String | "Completed", "Pending", or "Not Started"
hfReview | String | "Completed", "Pending", or "Not Started"
validation | String | "External", "Internal", or "None"
knownFailures | String | Known failure modes or limitations
approved | Boolean | Whether governance approval has been granted
approvalDate | String | Date of approval (YYYY-MM-DD) or empty
policyStatus | String | "Active", "Under Review", or "Not Started"
lastReview | String | Date of last governance review (YYYY-MM-DD)
trainingPct | Number | Percentage of relevant staff trained (0–100)
consentPct | Number | Percentage of affected patients with consent (0–100)
incidents | Number | Number of safety incidents reported
overrides | Number | Number of clinician overrides/disagreements

incidents — Array of safety event objects

Field | Type | Description
id | String | Unique incident ID (e.g., "INC-001")
toolId | String | References tool.id
date | String | Date of incident (YYYY-MM-DD)
type | String | Category: "False Positive", "Missed Detection", "Automation Bias", "Hallucination", "Bias-Related", "Workflow Disruption", "Alert Fatigue"
severity | String | "High", "Medium", or "Low"
dept | String | Department where event occurred
description | String | Narrative description of the event
status | String | "Open", "Under Review", or "Resolved"
investigation | String | "Completed", "In Progress", or "Not Started"
vendorNotified | Boolean | Whether the AI vendor was notified
correctiveAction | String | Description of corrective action taken (or empty)

training — Array of department training records

Field | Type | Description
dept | String | Department name
total | Number | Total staff in department
completed | Number | Staff who completed AI training
critThinking | Number | Staff who completed critical thinking assessment
biasEd | Number | Staff who completed bias education
errorReporting | Number | Staff trained on AI error reporting

consents — Array of department consent records

Field | Type | Description
dept | String | Department name
patients | Number | Total patients affected by AI in this department
disclosed | Number | Patients informed that AI is used in their care
consented | Number | Patients who gave informed consent
optedOut | Number | Patients who opted out of AI-assisted care

equity — Array of equity performance records (one per tool with demographic data)

Field | Type | Description
toolId | String | References tool.id
metric | String | What is being measured (e.g., "Sensitivity for PE detection")
overall | Number | Overall performance metric
white, black, hispanic, asian | Number | Performance by racial/ethnic group
elderly, pediatric | Number | Performance by age group

workforce — Array of department workforce survey results (1–5 scale)

Field | Type | Description
dept | String | Department name
satisfaction | Number | Staff satisfaction with AI tools (1–5)
workflowBurden | Number | Perceived workflow burden from AI (1 = low, 5 = high burden)
usability | Number | Usability rating of AI tools (1–5)
deskilling | Number | Concern about clinical skill erosion (1 = low, 5 = high concern)
speakUp | Number | Comfort speaking up about AI issues (1–5)

Sample Data File

Below is a minimal sample JSON file with 2 tools, 1 incident, and one record each for training, consent, equity, and workforce. Copy this as a starting template.

{
  "org": "Sample Hospital",
  "tools": [
    {
      "id": "AI-001",
      "name": "RadAssist",
      "vendor": "Aidoc",
      "dept": "Radiology",
      "useCase": "CT triage for stroke",
      "type": "Diagnostic",
      "owner": "Dr. Smith",
      "stage": "Production",
      "risk": "High",
      "fda": "Cleared",
      "transparency": "Documented",
      "biasReview": "Completed",
      "hfReview": "Completed",
      "validation": "External",
      "knownFailures": "Reduced accuracy on pediatric scans",
      "approved": true,
      "approvalDate": "2025-06-15",
      "policyStatus": "Active",
      "lastReview": "2026-01-10",
      "trainingPct": 90,
      "consentPct": 85,
      "incidents": 1,
      "overrides": 2
    },
    {
      "id": "AI-002",
      "name": "NoteHelper",
      "vendor": "Nuance",
      "dept": "Primary Care",
      "useCase": "Clinical documentation",
      "type": "Non-Diagnostic",
      "owner": "Dr. Jones",
      "stage": "Pilot",
      "risk": "Low",
      "fda": "Not Applicable",
      "transparency": "Partial",
      "biasReview": "Not Started",
      "hfReview": "Pending",
      "validation": "None",
      "knownFailures": "Transcription errors with accented speech",
      "approved": false,
      "approvalDate": "",
      "policyStatus": "Under Review",
      "lastReview": "",
      "trainingPct": 40,
      "consentPct": 60,
      "incidents": 0,
      "overrides": 0
    }
  ],
  "incidents": [
    {
      "id": "INC-001",
      "toolId": "AI-001",
      "date": "2026-02-15",
      "type": "Automation Bias",
      "severity": "Medium",
      "dept": "Radiology",
      "description": "Resident deferred to AI negative finding. Attending caught subtle PE on review.",
      "status": "Open",
      "investigation": "In Progress",
      "vendorNotified": false,
      "correctiveAction": ""
    }
  ],
  "training": [
    { "dept": "Radiology", "total": 24, "completed": 20, "critThinking": 16, "biasEd": 18, "errorReporting": 20 }
  ],
  "consents": [
    { "dept": "Radiology", "patients": 1000, "disclosed": 900, "consented": 870, "optedOut": 30 }
  ],
  "equity": [
    { "toolId": "AI-001", "metric": "PE detection sensitivity", "overall": 94.0, "white": 95.0, "black": 91.5, "hispanic": 93.0, "asian": 94.5, "elderly": 89.8, "pediatric": 82.0 }
  ],
  "workforce": [
    { "dept": "Radiology", "satisfaction": 4.0, "workflowBurden": 2.5, "usability": 3.8, "deskilling": 2.9, "speakUp": 4.1 }
  ]
}
💡 Save this as a .json file and upload via the Import button to test with your own data structure before entering real data.

📊 Dashboard

The landing page shows portfolio-wide KPIs: total AI tools (approved vs pending), high-risk tools count, open incidents, training completion percentage, patient disclosure rate, and overdue governance reviews. Alert banners surface when open incidents or overdue reviews exist. The ECRI Alignment Status section shows progress against ECRI's specific 2026 recommendations.

🤖 AI Registry

Central inventory of every AI tool. The summary table shows ID, name, vendor, department, type, risk level, stage, FDA status, and training completion. Below the table, each tool has an expanded detail card showing use case, owner, known failure modes, transparency rating, bias review status, human factors assessment, validation approach, and clinician override count.

🏛️ Governance

Tracks each tool's policy status, governance approval, last review date (flagged when the review is more than 180 days old), risk level, and assessment completion. The ECRI Governance Checklist shows progress bars for 6 key requirements. The Regulatory Tracking panel summarizes FDA clearance status and notes the current absence of federal AI regulation.
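The 180-day flag can be reproduced outside the app from the tools array. A hedged sketch (field names come from the specification above; the function name is ours, and treating an empty lastReview as overdue is our assumption):

```python
from datetime import date

def overdue_reviews(tools, today=None, max_age_days=180):
    """Return IDs of tools whose last governance review is missing or older than max_age_days."""
    today = today or date.today()
    flagged = []
    for tool in tools:
        last = tool.get("lastReview", "")
        if not last:
            flagged.append(tool["id"])  # never reviewed (assumption: counts as overdue)
            continue
        reviewed = date.fromisoformat(last)  # spec uses YYYY-MM-DD
        if (today - reviewed).days > max_age_days:
            flagged.append(tool["id"])
    return flagged
```

Pass `max_age_days=90` to approximate the quarterly cadence suggested for high-risk diagnostic tools in the FAQ.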

🎓 Training

Department-by-department tracking of four competency areas: AI system training, critical thinking assessment, bias education, and error reporting training. Each shows a progress bar. ECRI specifically requires all four of these training dimensions.
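The per-department progress bars reduce to simple percentages over the training record fields defined above. An illustrative sketch (the function name is ours):

```python
def training_rates(record):
    """Completion percentage for each of the four competency areas in one department record."""
    total = record["total"]
    areas = ["completed", "critThinking", "biasEd", "errorReporting"]
    return {a: round(100 * record[a] / total, 1) for a in areas}
```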

🩺 Clinical Use

Shows which tools are diagnostic vs. non-diagnostic, where each is deployed, and the clinician override count. High override rates may indicate poor AI fit; low override rates on diagnostic tools may indicate automation bias (clinicians deferring to AI without independent verification).

📋 Consent

Tracks patient AI disclosure and consent by department. ECRI requires that patients be informed when AI is used in their care and given the option to opt out. The disclosure rate progress bar highlights departments where patients have not been informed.
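The rates shown in this module follow directly from the consent record fields specified above. A minimal sketch (function and key names in the returned dict are ours):

```python
def consent_summary(record):
    """Disclosure, consent, and opt-out rates (percentages) for one department record."""
    n = record["patients"]
    return {
        "disclosureRate": round(100 * record["disclosed"] / n, 1),
        "consentRate": round(100 * record["consented"] / n, 1),
        "optOutRate": round(100 * record["optedOut"] / n, 1),
    }
```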

🚨 Safety Events

Incident log with AI-specific event categories: False Positive, Missed Detection, Automation Bias, Hallucination, Bias-Related, Workflow Disruption, and Alert Fatigue. Each incident tracks severity, investigation status, vendor notification, and corrective action. ECRI notes that AI-related events are often underreported because staff have difficulty attributing them to AI.

👥 Workforce

Survey data by department on a 1–5 scale across five dimensions: satisfaction, workflow burden, usability, deskilling concern, and speak-up culture. Workflow burden and deskilling use inverted color coding (high values = red). ECRI requires monitoring of staff satisfaction and fostering a just culture where staff feel safe reporting AI issues.

⚖️ Equity

Per-tool performance across 7 demographic groups (overall, White, Black, Hispanic, Asian, elderly, pediatric). Bar visualizations show the performance metric for each group, and the disparity gap (max minus min) is calculated automatically. Gaps exceeding 5 points trigger a yellow alert; gaps exceeding 10 points trigger a red alert requiring immediate investigation.
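The disparity gap and its alert thresholds can be checked by hand from an equity record. A sketch under one assumption: we compute the gap over the six demographic groups only, excluding the "overall" value (whether TruAI includes it is not specified here):

```python
GROUPS = ["white", "black", "hispanic", "asian", "elderly", "pediatric"]

def disparity_gap(record):
    """Max-minus-min performance across demographic groups, plus the alert level described above."""
    values = [record[g] for g in GROUPS]
    gap = round(max(values) - min(values), 1)
    if gap > 10:
        level = "red"      # immediate investigation required
    elif gap > 5:
        level = "yellow"
    else:
        level = "none"
    return gap, level
```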

📄 Reports

Executive summary metrics and an ECRI 2026 Compliance Scorecard with progress bars across 8 dimensions. Report templates are listed for: Executive Summary, ECRI Alignment Report, Incident Report, Training Report, Equity Report, and Audit Documentation.

FAQ

Is this real hospital data?
No. The demo uses simulated but realistic data. Upload your own data via the Import button to use real numbers.
What file format does Import accept?
JSON only. The file must contain the required arrays: tools, incidents, training, consents. The equity and workforce arrays are optional but recommended. See the Data Specification section above.
Is my data stored anywhere?
No. The application runs entirely in your browser. No data is sent to any server. Use Export to save your data as a local file.
Can multiple people use this simultaneously?
In this version, the app is single-user and browser-based. For multi-user deployment with shared data, contact us about the hosted version.
How often should governance reviews be conducted?
ECRI recommends regular oversight. The tracker flags tools whose last governance review was more than 180 days ago. For high-risk diagnostic tools, quarterly reviews are recommended.
What is the ECRI Compliance Scorecard?
It measures your organization's alignment with ECRI's 2026 AI recommendations across 8 dimensions: policies, training, critical thinking, patient disclosure, human factors reviews, bias assessments, algorithm transparency, and incident investigation.

Need help?

Questions about setup, data formatting, or deployment?

alain@heailtcaire.ai