Preppr.ai Responsible AI Use Policy

Last updated: July 11, 2025

1. Purpose and Scope

This policy establishes guidelines for the responsible development, deployment, and use of artificial intelligence systems within Preparedness Innovations, Inc. (dba Preppr.ai). It applies to all employees, contractors, and third-party integrations involving AI technologies in the Preppr.ai platform.

2. Policy Statement

Preppr.ai is committed to deploying AI systems that are reliable, transparent, ethical, and aligned with the critical nature of emergency preparedness work. Our AI implementations shall prioritize public safety, human oversight, and the trust requirements of our government, healthcare, and enterprise clients.

3. Responsible AI Principles

3.1 Fairness and Non-Discrimination

  • AI outputs should not perpetuate bias or discrimination against any individual or group

  • Platform-level controls shall mitigate inappropriate responses for emergency preparedness contexts

  • Work toward implementing comprehensive bias assessment tools as they become available and mature

  • Collaborate with AI providers to improve bias detection and mitigation in underlying models

3.2 Transparency and Explainability

  • Full Technical Disclosure: Complete transparency regarding technology stack, AI models, and third-party providers

  • Source Attribution: Open-source intelligence reports shall include specific citations from monitored sources where technically feasible

  • AI Attribution: Work toward clear identification of AI-generated content as industry standards and technical capabilities mature

  • Capability Communication: Clear documentation of AI system capabilities and limitations

  • Explainability Goals: Strive for improved AI decision transparency as interpretability tools and techniques advance

3.3 Accountability and Human Oversight

  • Stepped Workflows: Human review and validation required at critical decision points

  • Human Authority: AI augments rather than replaces human expertise in emergency preparedness

  • Final Approval: Human approval required before finalizing exercise plans or critical documentation

  • Override Capability: Users maintain ability to modify, correct, or reject AI recommendations

3.4 Privacy and Data Protection

  • Customer data encrypted at rest (AES-256) and in transit (TLS 1.2+)

  • Data access limited to authorized, background-checked U.S.-based personnel

  • Customer data not used for AI model training

  • Strict data residency requirements (U.S. regions only)

  • Clear data retention and deletion policies
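
The in-transit encryption requirement above (TLS 1.2+) can be enforced directly in application code. A minimal sketch using Python's standard `ssl` module; the function name is illustrative, not Preppr.ai's actual implementation:

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Build a TLS context that refuses anything older than TLS 1.2,
    matching the in-transit encryption requirement in this policy."""
    ctx = ssl.create_default_context()  # secure defaults: cert + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

A context built this way can be passed to standard clients, e.g. `http.client.HTTPSConnection(host, context=make_client_context())`.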

3.5 Safety and Reliability

  • Platform-Level Controls: Proprietary safeguards against unwanted behavior and harmful content

  • Security Measures: Protection against prompt injection and system manipulation

  • Professional Standards: Output quality controls ensuring appropriateness for emergency preparedness

  • Continuous Monitoring: System performance and security monitoring via established SOC 2 controls

3.6 Beneficence and Public Good

  • AI applications designed to improve emergency preparedness and public safety

  • Focus on augmenting human capabilities in critical disaster response scenarios

  • Commitment to serving the public interest through enhanced emergency preparedness

4. AI System Governance

4.1 Third-Party AI Model Management

  • Approved Providers: OpenAI, Anthropic, Google, Deepgram, and other established AI services that demonstrate commitment to responsible AI practices

  • Provider Evaluation: Regular assessment of third-party AI provider responsible AI practices and transparency reports

  • Model Updates: User notification of significant model changes via email and in-app notifications, with the goal of providing advance notice as provider capabilities allow

  • Service Monitoring: Tracking of provider SLA compliance and performance metrics

  • Industry Collaboration: Work with providers to improve transparency and responsible AI implementation across the ecosystem

4.2 Quality Assurance and Testing

  • Integration Testing: Comprehensive testing of AI service integrations

  • Security Testing: Regular vulnerability scans and penetration testing

  • Accessibility Testing: Automated Lighthouse accessibility audits integrated into CI/CD pipeline

  • User Experience Validation: Workflow testing to ensure human-AI collaboration effectiveness
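
The Lighthouse audit step above is typically driven by a configuration file checked into the repository. A hedged sketch of a Lighthouse CI `lighthouserc.json`; the URL and score threshold are illustrative assumptions, not Preppr.ai's actual settings:

```json
{
  "ci": {
    "collect": { "url": ["http://localhost:3000/"] },
    "assert": {
      "assertions": {
        "categories:accessibility": ["error", { "minScore": 0.9 }]
      }
    }
  }
}
```

With a file like this in place, the CI pipeline runs `lhci autorun` and fails the build when the accessibility score drops below the threshold.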

4.3 Risk Assessment and Mitigation

  • Annual Risk Assessment: Comprehensive evaluation of AI-related risks

  • Incident Response: Documented procedures for AI-related security or performance incidents

  • Business Continuity: Disaster recovery planning including AI service disruption scenarios

  • Vendor Risk Management: Regular evaluation of third-party AI provider risks

5. Human-AI Collaboration Standards

5.1 Workflow Design

  • AI systems shall provide suggestions and recommendations, not autonomous decisions

  • Critical outputs require human review before implementation

  • Users maintain control over final exercise designs and documentation

  • Clear delineation between AI-generated content and human decisions
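
The approval requirement in this workflow can be made structural rather than merely procedural: the data model itself can refuse to finalize without a named human reviewer. A minimal Python sketch; the class and field names are illustrative, not Preppr.ai's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExercisePlan:
    """An AI-drafted plan that cannot be finalized without a human reviewer."""
    ai_draft: str
    approved_by: Optional[str] = None  # set only via a human approval action

    def approve(self, reviewer: str) -> None:
        """Record the human reviewer who approved this plan."""
        self.approved_by = reviewer

    def finalize(self) -> str:
        """Release the plan; raises if no human has approved it."""
        if self.approved_by is None:
            raise PermissionError("Human approval required before finalizing")
        return self.ai_draft
```

The design choice here is that the override lives with the user: the draft can be edited or rejected at any time, but the happy path cannot skip the human gate.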

5.2 User Training and Support

  • Employee training on responsible AI use and limitations

  • User education on AI capabilities and appropriate use cases

  • Clear documentation of AI system boundaries and constraints

  • Support channels for AI-related questions and issues

5.3 Feedback and Continuous Improvement

  • Built-in feedback mechanisms throughout platform interface

  • User correction capabilities within stepped workflows

  • Regular collection and analysis of user feedback on AI performance

  • Iterative improvement based on user experience and domain expertise

6. Data and Privacy Governance

6.1 Data Handling Principles

  • Data Minimization: Collect only data necessary for AI functionality

  • Purpose Limitation: Use data only for stated emergency preparedness purposes

  • Retention Limits: Data retained only as long as necessary for active subscriptions

  • Secure Deletion: Automatic data deletion within 365 days of subscription suspension, or earlier upon request
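
The retention rule above reduces to a simple date computation. A sketch in Python; the 365-day constant comes from this policy, while the function and variable names are illustrative:

```python
from datetime import date, timedelta
from typing import Optional

RETENTION_DAYS = 365  # per policy: automatic deletion window after suspension

def deletion_due(suspended_on: date, requested_on: Optional[date] = None) -> date:
    """Return the date by which customer data must be deleted: 365 days
    after subscription suspension, or the customer's request date if earlier."""
    automatic = suspended_on + timedelta(days=RETENTION_DAYS)
    return min(automatic, requested_on) if requested_on else automatic
```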

6.2 Third-Party Data Sharing

  • Customer data not used for training third-party AI models

  • Temporary data retention by AI providers (typically 30 days maximum)

  • Clear agreements with AI providers regarding data handling and deletion

  • Regular audit of third-party data handling practices

7. Monitoring and Compliance

7.1 Performance Monitoring

  • System availability and response time tracking

  • AI service integration success rates

  • User workflow completion metrics

  • Security incident detection and response

7.2 Compliance Framework

  • SOC 2 Type 2 compliance maintenance

  • Regular internal audits of AI governance practices

  • Alignment with GovAI Coalition standards and templates

  • Adherence to client-specific compliance requirements

7.3 Issue Reporting and Resolution

  • Security Incidents: Immediate reporting to IT Manager

  • Ethics Concerns: Anonymous reporting via Google Form

  • General Issues: Built-in feedback widgets and standard support channels

  • Escalation Procedures: Clear pathways for critical AI-related concerns

8. Governance Structure

8.1 Advisory Group

Preppr.ai maintains an Advisory Group that includes:

  • AI Policy and Ethics Expert (currently filled): Provides expertise on responsible AI practices, ethical considerations, and policy compliance

  • Domain Representatives: Emergency preparedness and disaster response professionals

  • Researchers: Academic and industry experts in AI, emergency management, and related fields

  • Technical Leadership: CTO and senior engineering staff

  • Legal and Compliance: Representatives ensuring regulatory compliance

8.2 Roles and Responsibilities

  • Executive Leadership: Overall responsible AI strategy and resource allocation

  • AI Policy and Ethics Representative: Guidance on ethical AI practices and policy alignment

  • Product Development: Implementation of AI safety controls and user experience design

  • Security Team: AI security monitoring and incident response

  • Quality Assurance: Testing and validation of AI system performance

8.3 Review and Updates

  • Annual Policy Review: Comprehensive assessment and update of responsible AI policy

  • Quarterly Advisory Meetings: Regular AI Policy and Ethics Advisory Group sessions

  • Continuous Monitoring: Ongoing assessment of AI system performance and user feedback

  • Industry Alignment: Regular review of evolving responsible AI standards and best practices

9. Training and Awareness

9.1 Employee Training

  • Mandatory responsible AI training for all personnel

  • Role-specific training for staff working directly with AI systems

  • Regular updates on responsible AI practices and policy changes

  • Security awareness training including AI-specific threats

9.2 User Education

  • Clear documentation of AI capabilities and limitations in user agreements

  • Educational resources on effective and responsible AI system use

  • Regular communication about AI improvements and changes

  • Best practices guidance for emergency preparedness professionals

10. Incident Response

10.1 AI-Related Incidents

  • Definition: Any AI system malfunction, security breach, or ethical concern

  • Reporting: Immediate notification to designated incident response team

  • Assessment: Rapid evaluation of incident scope and potential impact

  • Response: Implementation of containment and remediation measures

  • Communication: Appropriate notification to affected users and stakeholders

10.2 Continuous Improvement

  • Post-incident analysis and lessons learned documentation

  • Policy and procedure updates based on incident findings

  • Sharing of insights with AI provider partners where appropriate

  • Integration of lessons learned into training and awareness programs

11. External Engagement

11.1 Industry Collaboration

  • Active participation in responsible AI industry discussions

  • Collaboration with GovAI Coalition on public sector AI standards

  • Engagement with AI provider partners on safety and ethics improvements

  • Contribution to responsible AI best practices development

11.2 Client Communication

  • Transparent communication of AI capabilities in service agreements

  • Regular updates on AI governance practices and improvements

  • Educational resources for client organizations on AI use in emergency preparedness

  • Clear escalation procedures for client AI-related concerns

12. Policy Enforcement

Violations of this policy may result in disciplinary action up to and including termination of employment or contract. All personnel are expected to report suspected violations through established reporting channels.

13. Updates to This Policy

This policy shall be reviewed annually, or as needed in response to technological developments, regulatory changes, or lessons learned from operational experience. The latest version is indicated by the "Last updated" date and takes effect immediately upon posting.

14. Contacting Us About This Policy

If you have any questions, comments, or concerns about this policy, please contact us by email at connect@preppr.ai or by mail at:

Preparedness Innovations, Inc.
232 Stagecoach Blvd
Evergreen, CO 80439

United States




Take control of your disaster preparedness.

Disaster preparedness isn’t working as well as it should—and we need it to evolve. Preppr is tackling this, starting with how we design and conduct disaster exercises.

© 2025 Preparedness Innovations, Inc. All rights reserved.

9878 W Belleview Ave #5053, Denver, CO 80123
