Business Continuity & Disaster Recovery
Key security principles at Onesurance
End-to-end encryption for all data in transit and at rest
Multi-factor authentication required for all users
24/7 security monitoring with automated incident detection
Data residency within the EU (Azure West Europe)
ISO 27001 certification in preparation (target Q1 2026)
Certifications and Standards
We hold Onesurance to the highest security and compliance standards in the financial sector. Below you will find an overview of our current certifications and roadmap.
GDPR: Compliant
DORA: Compliant
ISO 27001: In preparation (target Q1 2026)
TRUST CENTER - BUSINESS CONTINUITY & DISASTER RECOVERY
Onesurance Business Continuity & Disaster Recovery
Last updated: December 2024
Our Commitment
At Onesurance, we understand that our customers in the insurance sector rely on the availability of our platform for their daily operations. That is why we have comprehensive plans and procedures in place to ensure the continuity of our services, even when unexpected events occur. Our approach combines preventive measures, proactive planning, and tested recovery procedures.
Service Level Agreement (SLA)
Uptime Commitment
• Uptime: 99.9% per month (maximum 43 minutes downtime per month)
• Calculation: (Total minutes - downtime) / Total minutes × 100% (see the sketch after this list)
• Measurement: Automated external monitoring via third-party tools
• Reporting: Monthly SLA reports available via customer portal
• Service credits: Issued according to the contract if the SLA is not met
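To make the arithmetic behind this commitment concrete, the sketch below computes the monthly uptime percentage from recorded downtime and checks it against the 99.9% target. It is an illustration only, not our production monitoring code.

```python
# Illustrative sketch of the SLA uptime calculation described above.
# Not production monitoring code; the minute counts are examples.

def monthly_uptime_percent(total_minutes: int, downtime_minutes: float) -> float:
    """(Total minutes - downtime) / Total minutes x 100%."""
    return (total_minutes - downtime_minutes) / total_minutes * 100


def sla_met(total_minutes: int, downtime_minutes: float, target: float = 99.9) -> bool:
    """True when the monthly uptime percentage meets the SLA target."""
    return monthly_uptime_percent(total_minutes, downtime_minutes) >= target


if __name__ == "__main__":
    # A 30-day month has 30 * 24 * 60 = 43,200 minutes, so 0.1% corresponds
    # to roughly 43 minutes of allowable downtime.
    total = 30 * 24 * 60
    for downtime in (10, 43, 44):
        pct = monthly_uptime_percent(total, downtime)
        print(f"{downtime} min downtime -> {pct:.3f}% uptime, SLA met: {sla_met(total, downtime)}")
```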
Exclusions
• Planned maintenance: Announced at least 7 days in advance
• Force majeure: Beyond our reasonable control
• Customer-induced issues: Configuration errors, overload by customer
• Third-party service failures: Beyond Azure core services
• Beta/experimental features: Explicitly marked
Planned Maintenance
• Frequency: Maximum 1× per month
• Duration: Maximum 4 hours
• Schedule: Outside office hours (8:00 p.m. - 6:00 a.m. CET)
• Notification: Minimum 7 days in advance, via email and status page
• Zero downtime where possible: Use of blue-green deployments
Support & Response Times
P1 - Critical (Service Down)
• Initial response: 15 minutes
• Updates: Every hour during incident
• Resolution target: 4 hours
• Escalation: Automatic to management
P2 - High (Significant Degradation)
• Initial response: 1 hour
• Updates: Every 4 hours
• Resolution target: 8 hours (business hours)
• Escalation: After 4 hours
P3 - Medium (Minor Impact)
• Initial response: 4 business hours
• Updates: Daily
• Resolution target: 2 business days
• Escalation: In case of delays
P4 - Low (Question / Enhancement)
• Initial response: 1 business day
• Updates: Upon request
• Resolution target: 5 business days
• Escalation: Not required
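For illustration, the priority matrix above can be captured as a simple lookup table, for example to drive alerting or ticket routing. The structure and names below are examples, not part of our actual tooling; the values mirror the targets listed in this section.

```python
# Illustrative encoding of the P1-P4 response targets above.
# Field names are examples, not a real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class SeverityTargets:
    initial_response: str
    updates: str
    resolution_target: str
    escalation: str


SUPPORT_MATRIX = {
    "P1": SeverityTargets("15 minutes", "Every hour during incident", "4 hours", "Automatic to management"),
    "P2": SeverityTargets("1 hour", "Every 4 hours", "8 hours (business hours)", "After 4 hours"),
    "P3": SeverityTargets("4 business hours", "Daily", "2 business days", "In case of delays"),
    "P4": SeverityTargets("1 business day", "Upon request", "5 business days", "Not required"),
}

if __name__ == "__main__":
    # Example: look up the targets for a critical (service down) incident.
    print(SUPPORT_MATRIX["P1"])
```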
Business Continuity Plan (BCP)
Scope
The BCP covers:
• IT infrastructure and services
• Application availability
• Data integrity and recovery
• Personnel continuity
• Supplier dependencies
• Customer communication
• Regulatory compliance
Not within scope (separate plans):
• Physical office recovery (work from home possible)
• Financial continuity (CFO responsibility)
• Legal continuity (Legal team)
Critical Business Processes
Tier 1 - Mission Critical (RTO: 4 hours, RPO: 5 min)
• Core platform availability
• API endpoints
• Database access
• Authentication and authorization
• Risk Engine processing
• Customer dashboards
Tier 2 - Important (RTO: 8 hours, RPO: 1 hour)
• Reporting and analytics
• Admin portals
• Batch processing
• Email notifications
• Integration webhooks
Tier 3 - Normal (RTO: 24 hours, RPO: 24 hours)
• Documentation portals
• Training materials
• Non-critical background jobs
• Marketing website
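As an illustration, the tier definitions above can be expressed in configuration so that monitoring and recovery tooling checks each service against the correct objectives. The mapping of services to tiers shown here is hypothetical.

```python
# Hypothetical mapping of service tiers to the RTO/RPO objectives above.
from datetime import timedelta

TIER_OBJECTIVES = {
    "tier1": {"rto": timedelta(hours=4), "rpo": timedelta(minutes=5)},
    "tier2": {"rto": timedelta(hours=8), "rpo": timedelta(hours=1)},
    "tier3": {"rto": timedelta(hours=24), "rpo": timedelta(hours=24)},
}

# Example (hypothetical) assignment of services to tiers.
SERVICE_TIERS = {
    "api": "tier1",
    "risk-engine": "tier1",
    "reporting": "tier2",
    "docs-portal": "tier3",
}

if __name__ == "__main__":
    for service, tier in SERVICE_TIERS.items():
        objectives = TIER_OBJECTIVES[tier]
        print(f"{service}: RTO {objectives['rto']}, RPO {objectives['rpo']}")
```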
Business Impact Analysis (BIA)
Performed: Annually and when significant changes occur
Last assessment: [Date]
Impact Categories:
• Financial: Revenue loss, contractual penalties
• Operational: Inability to deliver services
• Reputational: Customer trust, brand damage
• Regulatory: Non-compliance, fines
• Contractual: SLA breaches
Acceptable Outage Windows:
• Tier 1: Maximum 4 hours before significant impact
• Tier 2: Maximum 8 hours
• Tier 3: Maximum 24 hours
Dependencies Identified:
• Microsoft Azure: Primary dependency
• Internet connectivity: Multiple ISPs
• Personnel: Key person dependencies mitigated
• Third-party services: Assessed and documented
Disaster Recovery (DR) Strategy
Recovery Objectives
RTO (Recovery Time Objective): 4 hours
• Definition: Maximum acceptable time to restore services
• Scope: Tier 1 (mission-critical) services
• Measurement: From incident start to full functionality
• Target: 4 hours for 99% of scenarios
RPO (Recovery Point Objective): 5 minutes
• Definition: Maximum acceptable data loss
• Scope: All customer data
• Measurement: Point-in-time of last successful backup
• Implementation: Continuous database replication
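The sketch below shows how observed recovery figures from an incident or test can be compared against these objectives. The timestamps and helper names are illustrative.

```python
# Illustrative check of observed RTO/RPO against the stated objectives.
from datetime import datetime, timedelta

RTO_TARGET = timedelta(hours=4)    # maximum time to restore Tier 1 services
RPO_TARGET = timedelta(minutes=5)  # maximum acceptable data loss


def evaluate(incident_start: datetime, service_restored: datetime,
             last_replicated_transaction: datetime) -> dict:
    """Return observed RTO/RPO and whether each target was met."""
    observed_rto = service_restored - incident_start
    observed_rpo = incident_start - last_replicated_transaction
    return {
        "observed_rto": observed_rto,
        "rto_met": observed_rto <= RTO_TARGET,
        "observed_rpo": observed_rpo,
        "rpo_met": observed_rpo <= RPO_TARGET,
    }


if __name__ == "__main__":
    # Example timestamps (illustrative only).
    result = evaluate(
        incident_start=datetime(2024, 12, 1, 9, 0),
        service_restored=datetime(2024, 12, 1, 12, 30),
        last_replicated_transaction=datetime(2024, 12, 1, 8, 57),
    )
    print(result)
```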
DR Architecture
Primary Site
• Region: Azure West Europe (Amsterdam, NL)
• Availability Zones: 3 zones for redundancy
• Configuration: Active multi-zone deployment
DR Site
• Region: Azure North Europe (Ireland)
• Status: Warm standby (pre-provisioned, not active)
• Data replication: Real-time for databases
• Configuration sync: Automated via IaC
Failover Approach
• Databases: Automated failover groups (within EU)
• Applications: Manual failover after assessment
• DNS: Automated failover via Azure Traffic Manager
• Monitoring: Continues at DR site
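As a simplified illustration of the assessment step that precedes a manual application failover, the sketch below polls hypothetical health endpoints in the primary and DR regions and flags when an assessment should start. It does not represent our actual monitoring stack or the Azure failover mechanisms themselves.

```python
# Simplified health-check sketch; endpoints and thresholds are hypothetical,
# and real failover decisions follow the documented assessment procedure.
import urllib.request

ENDPOINTS = {
    "primary (West Europe)": "https://example-primary.invalid/health",
    "dr (North Europe)": "https://example-dr.invalid/health",
}


def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, connection and timeout failures
        return False


if __name__ == "__main__":
    status = {name: is_healthy(url) for name, url in ENDPOINTS.items()}
    print(status)
    if not status["primary (West Europe)"] and status["dr (North Europe)"]:
        print("Primary region unhealthy: start failover assessment per runbook.")
```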
DR Scenarios
Scenario 1: Availability Zone Failure
• Likelihood: Low (but prepared)
• Detection: Automated health checks
• Response: Automatic failover to other zones
• Impact: None or minimal (seconds)
• RTO: <5 minutes
• RPO: 0 (real-time replication)
Scenario 2: Regional Failure
• Likelihood: Very low (but prepared)
• Detection: Monitoring, Azure status, alerts
• Response: Manual failover to DR region after assessment
• Impact: Moderate (service interruption)
• RTO: 4 hours (includes assessment and coordination)
• RPO: 5 minutes
Scenario 3: Data Corruption/Deletion
• Likelihood: Low (access controls, backups)
• Detection: Monitoring, user reports, data validation
• Response: Point-in-time restore from backups
• Impact: Varies depending on affected scope
• RTO: 1-4 hours (depends on data volume)
• RPO: 5 minutes
Scenario 4: Ransomware/Malware
• Likelihood: Low (prevention controls)
• Detection: Antivirus, anomaly detection, monitoring
• Response: Isolate, eradicate, restore from clean backups
• Impact: Potentially high (if widespread)
• RTO: 4-8 hours (includes forensics, verification)
• RPO: 5 minutes (immutable backups)
Scenario 5: Major Azure Global Outage
• Likelihood: Very rare (but impossible to rule out)
• Detection: Azure status dashboard, Microsoft communication
• Response: Escalate to Microsoft, wait for resolution
• Impact: High (unable to failover within Azure)
• RTO: Depends on Microsoft resolution
• Contingency: Communication plan, alternative work
Backup Strategy
Database Backups
• Type: Automated Azure SQL Database backups
• Frequency: Continuous (point-in-time restore every 5 minutes)
• Retention:
Short-term: 35 days point-in-time restore
Long-term: Weekly backups for 10 years
• Storage: Geo-redundant (replicated within the EU)
• Encryption: AES-256 for all backups
• Testing: Monthly restore tests
Application & Configuration
• Type: Infrastructure as Code (Terraform)
• Frequency: Upon every change (version controlled)
• Storage: Git repository with branching strategy
• Encryption: Repository level encryption
• Testing: Automated deployment tests in staging
File Storage (Blob Storage)
• Type: Geo-redundant storage (GRS)
• Frequency: Real-time replication
• Versioning: Enabled (30-day version history)
• Soft delete: 30-day retention
• Immutable storage: For compliance-critical data
Logs & Audit Data
• Type: Azure Log Analytics long-term retention
• Frequency: Continuous ingestion
• Retention: 1 year online, 7 years archived
• Storage: Immutable, encrypted
• Testing: Quarterly restore verification
Backup Testing
• Frequency: Monthly automated restore tests
• Scope: Sample databases, recent backups
• Validation: Data integrity checks, completeness
• Full DR test: Semi-annual (see DR Testing)
• Documentation: Test results logged and reviewed
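The sketch below illustrates the kind of validation performed during a monthly restore test: comparing row counts and a content checksum between a source sample and its restored copy. SQLite and the table name stand in for the real databases and are examples only.

```python
# Illustrative restore-test validation: compare a restored sample with its
# source using row counts and a simple content hash. SQLite stands in for
# the real databases; the table name is an example.
import hashlib
import sqlite3


def table_fingerprint(conn: sqlite3.Connection, table: str) -> tuple[int, str]:
    """Return (row count, SHA-256 of the ordered rows) for one table."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()
    digest = hashlib.sha256(repr(rows).encode("utf-8")).hexdigest()
    return len(rows), digest


if __name__ == "__main__":
    # Two tiny in-memory databases stand in for the source and the restore.
    source, restored = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for db in (source, restored):
        db.execute("CREATE TABLE policies (id INTEGER, holder TEXT)")
        db.executemany("INSERT INTO policies VALUES (?, ?)",
                       [(1, "Alpha"), (2, "Beta")])
    ok = table_fingerprint(source, "policies") == table_fingerprint(restored, "policies")
    print("Restore test passed" if ok else "Restore test FAILED")
```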
DR Testing
Test Frequency & Types
Monthly: Backup Restore Tests
• Scope: Sample backup restore
• Environment: Isolated test environment
• Validation: Data integrity, completeness
• Duration: 2-4 hours
• Pass criteria: Successful restore without errors
Quarterly: Failover Tests
• Scope: Database failover to DR region
• Environment: Non-production replication
• Validation: Failover success, data consistency
• Duration: 4 hours
• Pass criteria: RTO/RPO within targets
Semi-Annual: Full DR Simulation
• Scope: Complete platform failover, all tiers
• Environment: Controlled test with select customers
• Teams: All IR/DR teams, management
• Duration: 8 hours (full day exercise)
• Validation:
All services restored
Data integrity confirmed
Communication protocols followed
RTO/RPO targets met
• Documentation: Comprehensive report
• Improvements: Action items addressed
Annual: Disaster Scenario Exercise
• Scope: Realistic, complex disaster (tabletop + technical)
• Participants: Full company, key stakeholders
• Scenarios: Varied (ransomware, regional failure, etc.)
• Duration: Full day
• Outcomes:
Team preparedness assessed
Gaps identified
Plan improvements
Training requirements
Test Documentation
• Test plan: Objectives, scope, success criteria
• Test execution: Step-by-step log
• Results: Pass/fail per component
• Issues: Problems encountered and resolutions
• Improvements: Recommendations for plan updates
• Sign-off: Management approval
Personnel Continuity
Key Person Dependencies
• Identified: All critical roles mapped
• Mitigation: Cross-training, documentation
• Backup: Secondary person per critical role
• Knowledge transfer: Regular rotation
On-Call Rotation
• Coverage: 24/7 for critical systems
• Rotation: Weekly rotation between engineers
• Escalation: Clear escalation paths
• Compensation: On-call allowance + overtime
Remote Work Capability
• Infrastructure: Cloud-based, accessible globally
• VPN: Available for secure remote access
• Equipment: Laptops provided, BYOD policy
• Testing: Regular remote work exercises
• Policy: Work-from-home as default for DR
Succession Planning
• Key roles: Backup identified for management
• Documentation: Role responsibilities documented
• Transitions: Smooth handover procedures
• Authority: Delegated decision-making in DR
Supplier & Third-Party Continuity
Critical Suppliers
Microsoft Azure
• Risk: Primary infrastructure provider
• SLA: 99.99% with financial penalties
• DR: Multi-region, availability zones
• Monitoring: Azure status dashboard
• Escalation: Premier support contract
• Alternative: Not feasible in the short term (a full migration would take months)
• Mitigation: Multi-zone deployment, DR region
Bonsai Software (Development Partner)
• Risk: Development capacity dependency
• SLA: Best-effort support
• DR: Not critical (internal tooling)
• Mitigation: Knowledge documentation, code ownership
• Alternative: Internal team can maintain
Supplier Assessment
• Frequency: Annual supplier risk assessments
• Criteria: Financial stability, BC plans, security
• Documentation: Supplier BC plans on file
• Contracts: BC requirements in agreements
• Monitoring: Ongoing performance tracking
Communication Plan
Stakeholder Communication
Internal Team
• Channel: Microsoft Teams dedicated channel
• Frequency: Real-time updates during DR event
• Content: Status, actions, next steps
• Responsibility: DR Coordinator
Management/Executive
• Channel: Direct call, email, Teams
• Frequency: Every hour (P1), every 4 hours (P2)
• Content: Impact, progress, decisions needed
• Responsibility: CTO / DR Manager
Customers
• Channel: Email, status page, in-app notifications
• Frequency: Initial notification, then hourly updates
• Content:
What is happening
Which services are affected
Expected recovery time
Alternatives (if available)
Next update timing
• Responsibility: Customer Success + DR Manager
Regulatory Authorities
• Trigger: Significant incidents with compliance impact
• Timeline: Per regulatory requirements
GDPR breach: Within 72 hours
DORA: Per incident classification
• Content: Incident details, impact, remediation
• Responsibility: DPO / Compliance Officer
Media/Public (if necessary)
• Trigger: High-profile incidents, media inquiries
• Approval: Executive team
• Spokesperson: Designated representative
• Message: Consistent with customer communication
Status Page
• URL: onesurance
• Updates: Real-time status, incident history
• Subscriptions: Email/SMS notifications available
• Transparency: Public visibility of uptime
Templates
• Pre-approved templates for common scenarios
• Tone: Professional, transparent, empathetic
• Languages: Dutch and English
• Review: Legal approval for critical communications
Compliance & Regulatory
Regulatory Requirements
DORA (Digital Operational Resilience Act)
• Applies to: Insurance sector clients
• Requirements:
ICT risk management framework
Incident reporting (24 hours for major incidents)
Resilience testing (annual)
Third-party risk monitoring
• Compliance: BC/DR plan aligned with DORA
• Testing: Annual DORA-specific resilience test
• Reporting: Incident classification and notification
GDPR/AVG
• Requirement: Data availability and integrity
• Backups: Encrypted, tested, retention compliant
• Data breach: Notification procedures (see Template 06)
• Impact assessments: DPIA for BC/DR processes
Contractual Obligations
• SLA commitments: 99.9% uptime
• Data retention: Per contract specifics
• Notification: Customer notification in case of incidents
• Testing: Right to witness DR tests
Documentation & Records
BC/DR Documentation
• Business Continuity Plan: This document
• Disaster Recovery Procedures: Technical runbooks
• Test results: All tests logged
• Incident reports: Post-incident analyses
• Risk assessments: Annual BIA
• Supplier assessments: Third-party evaluations
Record Retention
• Plans: Current version + 3 years history
• Tests: 7 years
• Incidents: 7 years
• Assessments: 7 years
• Availability: For auditors, regulators on request
Maintenance Plan
Update Triggers
• Annually: Scheduled comprehensive review
• Significant changes: New services, infrastructure, personnel
• Post-incident: After major incidents
• Post-test: After DR tests with identified gaps
• Regulatory changes: New compliance requirements
• Supplier changes: New/changed dependencies
Review Process
1. Draft updates (DR Manager)
2. Technical review (Engineering team)
3. Compliance review (DPO/Legal)
4. Management approval
5. Communication to stakeholders
6. Training on changes
7. Version control and distribution
Governance
• Owner: CTO
• Approver: Executive team
• Review cycle: Annual
• Next review: [Date]
Continuous Improvement
Performance Metrics
• Actual RTO/RPO: Measured during incidents and tests
• Test success rate: % of DR tests passed
• Backup success rate: % of backups completed successfully
• Restore success rate: % of restores completed successfully
• Incident frequency: Number and severity
• Compliance: % compliant with requirements
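These percentages can be derived directly from logged events. The sketch below shows one simple way to compute them, using illustrative counts rather than actual figures.

```python
# Illustrative computation of the BC/DR metrics listed above.

def rate(successes: int, total: int) -> float:
    """Success rate as a percentage; returns 0 when there is no data."""
    return round(100 * successes / total, 1) if total else 0.0


if __name__ == "__main__":
    # Example counts (illustrative, not actual figures).
    metrics = {
        "DR test success rate": rate(successes=11, total=12),
        "Backup success rate": rate(successes=364, total=365),
        "Restore success rate": rate(successes=12, total=12),
    }
    for name, value in metrics.items():
        print(f"{name}: {value}%")
```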
Reporting
• Monthly: Backup and uptime reports
• Quarterly: BC/DR metrics dashboard
• Semi-annual: Full BC/DR program review
• Annual: Executive report for Board
Sources of Improvement
• Post-incident reviews: Lessons learned
• Test results: Gaps identified
• Industry best practices: Benchmarking
• Regulatory feedback: Auditor findings
• Technology advances: New capabilities
Customer Responsibilities
Shared Responsibility Model
• Onesurance responsible for: Platform availability, data backups
• Customer responsible for:
User access management
Data accuracy and quality
Export and archive of own reports (if necessary)
Business continuity for own operations
Alternative workflows during outages
Customer Actions During Outages
• Monitor status page: onesurance
• Follow updates: Email notifications
• Use workarounds: If available
• Report issues: Via support channels
• Avoid: Excessive retries (may delay recovery)
Customer BC Planning
• Documentation: Maintain own BC plans
• Dependencies: Include Onesurance in your own dependency risk assessments
• Testing: Include Onesurance dependencies in your own BC tests
• Contacts: Keep contact lists current
• Escalation: Know how to reach us in emergencies
Contact
For questions about business continuity and disaster recovery, contact:
• 24/7 Incident Hotline: onesurance
• BC/DR Planning: onesurance
• SLA Questions: onesurance
Last updated: December 2024
Onesurance B.V. | Breda, Netherlands | Chamber of Commerce: 87521997