Effective Date: July 1, 2025
Version: 1.0
At CliniScripts Inc. (“CliniScripts”, “we”, “our”, or “us”), we are committed to the ethical and responsible deployment of Artificial Intelligence (“AI”) technologies. This AI Safety & Ethics Policy outlines how we design, monitor, and govern our AI systems to ensure they remain aligned with clinical values, legal frameworks, and user trust.
CliniScripts provides AI-powered scribe services to clinicians, particularly in sensitive sectors such as mental health. Our goal is to enhance—not replace—clinical workflows, while upholding the highest standards of transparency, privacy, safety, and ethical responsibility.
| Area | Safety Control |
| --- | --- |
| Model Selection | Only alignment-vetted models are deployed (e.g., GPT-4, Claude, fine-tuned LLaMA). |
| Prompt Filtering | Middleware filters inbound prompts for injection, toxicity, and manipulation (see the first sketch below this table). |
| Session Isolation | Every session is sandboxed; logs are stored securely for compliance. |
| Red Teaming | Monthly internal simulations test for hallucination and evasion. |
| Audit Logging | All model interactions are logged with session-level metadata (see the second sketch below this table). |
| Human Oversight | Clinicians review and approve all AI outputs before EMR integration (see the third sketch below this table). |
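To make the prompt-filtering control concrete, here is a minimal sketch of what such middleware could look like. It assumes a simple pattern-based pre-check; the `INJECTION_PATTERNS` list and `filter_prompt` helper are illustrative names, not CliniScripts' actual implementation, and a production filter would rely on trained classifiers rather than keyword patterns.

```python
import re

# Illustrative patterns only; a production filter would use trained
# classifiers, not a keyword list. This sketch shows the middleware shape.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def filter_prompt(prompt: str) -> str:
    """Reject prompts matching known injection patterns before they reach
    the model. Raises ValueError so the caller can log and refuse."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected by safety middleware")
    return prompt
```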
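Session-level audit logging might look like the following minimal sketch. The logger name, record fields, and the choice to store content hashes rather than raw text are assumptions made for illustration; hashing keeps protected health information out of the log stream.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("cliniscripts.audit")  # hypothetical logger name

def log_interaction(session_id: str, model: str,
                    prompt_hash: str, output_hash: str) -> None:
    """Write one structured audit record per model interaction.
    Content is hashed, not stored verbatim (an assumption)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "model": model,
        "prompt_sha256": prompt_hash,
        "output_sha256": output_hash,
    }
    logger.info(json.dumps(record))
```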
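The human-oversight control can be pictured as an approval gate: an AI-generated note stays in a pending state until a clinician signs off, and only approved notes can be written to the EMR. The `DraftNote` type and the commented-out `emr_client.write` call below are hypothetical, shown only to illustrate the gate.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class DraftNote:
    """An AI-generated note held in draft until a clinician signs off."""
    session_id: str
    text: str
    status: ReviewStatus = ReviewStatus.PENDING

def push_to_emr(note: DraftNote) -> None:
    """Refuse to write to the EMR unless the note carries an explicit
    clinician approval."""
    if note.status is not ReviewStatus.APPROVED:
        raise PermissionError("Note has not been approved by a clinician")
    # emr_client.write(note.session_id, note.text)  # hypothetical EMR call
```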
In the rare event of unexpected or unsafe AI behavior:
Our AI systems are designed in accordance with:
We reserve the right to update this AI Safety & Ethics Policy at any time. Any changes will be reflected on this page and, where appropriate, communicated to our users.
If you have questions, concerns, or complaints about CliniScripts’ AI practices or safety protocols, please contact: