

Provo, Utah - May 1, 2025
By Henry O’Connell, CEO and co-founder, Canary Speech
The buzz around artificial intelligence solutions reached a fever pitch in March when OpenAI closed a record-breaking $40 billion funding round — the largest private tech deal to date. It’s a clear sign that global leaders are pouring major resources into AI — China recently announced its own multibillion-dollar investment fund — despite lacking a clear roadmap for its long-term direction.
While major investors contemplate how to capitalize on AI technologies, how should entrepreneurs forge a unique path to bringing their solutions to market? The answer begins with the basics. Even small companies can take essential steps by investing in cybersecurity, emphasizing data privacy, and placing ethical uses of AI at the center of any business partnership.
At the ground level, where artificial intelligence technologies integrate with behavioral, cognitive, and other neurological healthcare platforms in clinical settings, tech innovators are collaborating on a careful approach to the most pressing ethical concerns facing patients and providers.
B2B Audits
Healthcare industry partners using AI must assess each other’s protocols with a keen eye toward the ethical handling of potentially personal information. Clinical providers and the third-party firms they contract with must agree on standards that minimize the risk of AI-driven technology being misused.
Regardless of AI use, healthcare companies must comply with HIPAA requirements to remain reliable partners. Similarly, industry leaders are quick to promote “responsible use” of AI in internal systems — but how can contracting organizations and clients verify those claims?
A reasonable framework would include three audits: Partner A completes a self-audit; Partner B audits Partner A; and an independent third party, Partner C, audits both A and B. While scopes may overlap, this tri-level audit structure represents the minimum level of scrutiny Fortune 500 companies now expect in healthcare partnerships.
De-identifying data is essential whenever patient data-sharing is involved. An anonymous ID should be used to transmit data between a business and a clinician, with re-identification occurring only behind the provider's firewall. This ensures third-party companies never learn the patient’s identity.
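The de-identification flow described above can be sketched in code. This is a minimal illustration only — the class names, the UUID-based anonymous ID, and the placeholder analysis function are assumptions for the sake of the example, not any vendor's actual implementation:

```python
import uuid

class ProviderRegistry:
    """Lives behind the clinical provider's firewall.

    Holds the only mapping between anonymous IDs and real patient
    identities; the third-party firm never sees this table.
    """

    def __init__(self):
        self._id_map = {}  # anonymous ID -> real patient identifier

    def deidentify(self, patient_id: str) -> str:
        """Issue a random anonymous ID for outbound data sharing."""
        anon_id = uuid.uuid4().hex
        self._id_map[anon_id] = patient_id
        return anon_id

    def reidentify(self, anon_id: str) -> str:
        """Re-link results to the patient -- only inside the firewall."""
        return self._id_map[anon_id]


def third_party_analysis(anon_id: str, sample: bytes) -> dict:
    """The vendor works only with the anonymous ID and the raw sample,
    so it never learns the patient's identity."""
    return {"subject": anon_id, "risk_score": 0.12}  # placeholder score


registry = ProviderRegistry()
anon = registry.deidentify("MRN-004521")          # provider side
result = third_party_analysis(anon, b"voice sample")  # vendor side
patient = registry.reidentify(result["subject"])  # provider side only
```

The key design point is that the mapping table exists in exactly one place — behind the provider's firewall — so a breach at the third party exposes only unlinkable tokens.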
Beyond Audits: The Role of Third-Party Certifications
Many industries rely on independent certifications — unaffiliated with government agencies — to demonstrate transparency and build trust. A company may be HIPAA-compliant yet still fail an IT risk assessment. HIPAA sets minimum standards for privacy and security, but broader evaluations address a wider range of vulnerabilities, such as:
- Third-party access controls (e.g., external API integrations)
- Endpoint security (e.g., securing mobile devices used in telehealth)
- AI model security (e.g., guarding against adversarial attacks and data poisoning)
- Incident response readiness (e.g., speed of breach detection and mitigation)
For mental healthcare organizations considering AI vendors, it's crucial to go beyond HIPAA and demand evidence of comprehensive IT security assessments.
The threat of cyberattacks against the healthcare sector is escalating. In 2024 alone, there were 13 data breaches involving more than 1 million healthcare records, including the largest healthcare data breach of all time, which affected an estimated 100 million individuals. Penetration tests (pen-tests) conducted by ethical hackers can provide crucial insight into security weaknesses.
System and Organization Controls 2 (SOC 2) assesses an organization against five trust services criteria: security, availability, processing integrity, confidentiality, and privacy. Because SOC 2 controls closely align with HIPAA requirements, it is particularly relevant to the healthcare industry. A SOC 2 Type 1 report verifies that controls are suitably designed at a point in time, while a SOC 2 Type 2 report confirms those controls operate effectively over a period of months.
Other important certifications include ISO 27001 for risk-based information security management, HITRUST for comprehensive integration of HIPAA, NIST, and other compliance standards, and ISO 42001 for responsible AI practices. Unlike SOC 2, which focuses on reporting, ISO 27001 emphasizes continual improvement and governance. HITRUST, widely adopted by healthcare enterprises and SaaS providers, signals a high level of security assurance for sensitive health data.
These safeguards support multinational organizations that operate independently of national governments yet must deliver consistently secure products. Certification has become an essential signal to prospective industry partners that mental healthcare data will be protected to the highest standards.
Closing the AI Governance Gap
With CIOs reportedly under pressure from their boards to adopt new technologies, there's growing concern about an “AI governance gap”: a pace of innovation that outstrips regulation, potentially slowing adoption of AI in mental health settings.
Against this backdrop, industry leaders are taking a cautious approach to AI implementation — one that emphasizes transparency between partner organizations and prioritizes patient-centric outcomes. While AI holds vast potential to transform mental healthcare, its implementation must be vigilant and deliberate to ensure the safety and privacy of patient data.

Henry O’Connell is the CEO and co-founder of Canary Speech, a Provo, Utah-based AI-powered health tech company using real-time vocal biomarkers to screen for mental health and neurological disorders. O’Connell has more than twenty years of experience leading technology companies, both private and public, and has served on the boards of directors of several technology companies in the U.S. and internationally. His medical diagnostic and technology experience includes work at Hewlett-Packard, Gibson, and the National Institutes of Health in neurological research.
