
Key security principles at Onesurance

• End-to-end encryption for all data in transit and at rest
• Multi-factor authentication required for all users
• 24/7 security monitoring with automated incident detection
• Data residency within the EU (Azure West-Europe)
• ISO 27001 certification in preparation (target: Q1 2026)

Certifications and Standards

Onesurance adheres to the highest security and compliance standards in the financial sector. Below is an overview of our current certifications and roadmap.

• GDPR (EU General Data Protection Regulation): Compliant
• DORA (Digital Operational Resilience Act): Compliant
• ISO 27001 (information security management standard): Certification in preparation, target Q1 2026

TRUST CENTER - BUSINESS CONTINUITY & DISASTER RECOVERY

Onesurance Business Continuity & Disaster Recovery
Last updated: December 2024

Our Commitment

At Onesurance, we understand that our clients in the insurance sector rely on the availability of our platform for their daily operations. We therefore maintain comprehensive plans and procedures to safeguard the continuity of our services, even in the event of unexpected disruptions. Our approach combines preventive measures, proactive planning, and tested recovery procedures.

Service Level Agreement (SLA)

Uptime Commitment
• Uptime: 99.9% per month (a maximum of approximately 43 minutes of downtime per month)
• Calculation: (total minutes - downtime minutes) / total minutes × 100% (see the sketch below this list)
• Measurement: Automated external monitoring via third-party tools
• Reporting: Monthly SLA reports available via the customer portal
• Service credits: Issued per contract when the SLA is not met
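As a minimal sketch of the uptime arithmetic above, the snippet below computes the monthly downtime budget for the 99.9% target and checks a measured month against it; the measured downtime value is an illustrative assumption, not a real figure.

```python
# Minimal sketch of the SLA arithmetic; the measured downtime is illustrative.
SLA_TARGET = 99.9  # monthly uptime commitment, in percent

def monthly_uptime(total_minutes: int, downtime_minutes: float) -> float:
    """Uptime % = (total minutes - downtime minutes) / total minutes x 100%."""
    return (total_minutes - downtime_minutes) / total_minutes * 100

def downtime_budget(total_minutes: int, target_pct: float = SLA_TARGET) -> float:
    """Maximum downtime (in minutes) that still meets the target."""
    return total_minutes * (1 - target_pct / 100)

total = 30 * 24 * 60                       # a 30-day month = 43,200 minutes
print(f"Allowed downtime: {downtime_budget(total):.1f} min")    # ~43.2 min
measured_downtime = 25.0                   # illustrative measurement
achieved = monthly_uptime(total, measured_downtime)
print(f"Achieved uptime: {achieved:.3f}% -> SLA met: {achieved >= SLA_TARGET}")
```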

Exclusions
• Planned maintenance: Announced at least 7 days in advance
• Force majeure: Events beyond our reasonable control
• Customer-induced issues: Configuration errors, overload caused by the customer
• Third-party service failures: Outside Azure core services
• Beta/experimental features: Explicitly marked as such

Planned Maintenance
• Frequency: At most once per month
• Duration: At most 4 hours
• Schedule: Outside business hours (20:00 - 06:00 CET)
• Notification: At least 7 days in advance, via email and the status page
• Zero downtime where possible: Using blue-green deployments

Support & Response Times

P1 - Critical (Service Down)
• Initial response: 15 minutes (a deadline sketch follows at the end of this section)
• Updates: Hourly during the incident
• Resolution target: 4 hours
• Escalation: Automatic escalation to management

P2 - High (Significant Degradation)
• Initial response: 1 hour
• Updates: Every 4 hours
• Resolution target: 8 hours (business hours)
• Escalation: After 4 hours

P3 - Medium (Minor Impact)
• Initial response: 4 business hours
• Updates: Daily
• Resolution target: 2 business days
• Escalation: If delays occur

P4 - Low (Question / Enhancement)
• Initial response: 1 business day
• Updates: On request
• Resolution target: 5 business days
• Escalation: Not required
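As a hedged illustration of how these targets can be applied, the sketch below encodes the initial-response targets as a lookup table and derives the first-response deadline for a reported incident; the business-hours simplifications noted in the comments are assumptions, not a description of the actual support tooling.

```python
# Illustrative encoding of the initial-response targets above; P3/P4 are
# simplified to calendar time here, whereas the policy counts business hours/days.
from datetime import datetime, timedelta

RESPONSE_TARGETS = {
    "P1": timedelta(minutes=15),
    "P2": timedelta(hours=1),
    "P3": timedelta(hours=4),   # business hours in the policy; simplified here
    "P4": timedelta(days=1),    # one business day in the policy; simplified here
}

def initial_response_due(priority: str, reported_at: datetime) -> datetime:
    """Deadline for the first response, per the matrix above."""
    return reported_at + RESPONSE_TARGETS[priority]

print(initial_response_due("P1", datetime(2024, 12, 1, 10, 0)))  # 2024-12-01 10:15:00
```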

Business Continuity Plan (BCP)

Scope

The BCP covers:
• IT infrastructure and services
• Application availability
• Data integrity and recovery
• Personnel continuity
• Supplier dependencies
• Customer communication
• Regulatory compliance

Out of scope (covered by separate plans):
• Physical office recovery (working from home is possible)
• Financial continuity (CFO responsibility)
• Legal continuity (Legal team)

Critical Business Processes

Tier 1 - Mission Critical (RTO: 4 hours, RPO: 5 minutes)
• Core platform availability
• API endpoints
• Database access
• Authentication and authorization
• Risk Engine processing
• Customer dashboards

Tier 2 - Important (RTO: 8 hours, RPO: 1 hour)
• Reporting and analytics
• Admin portals
• Batch processing
• Email notifications
• Integration webhooks

Tier 3 - Normal (RTO: 24 hours, RPO: 24 hours)
• Documentation portals
• Training materials
• Non-critical background jobs
• Marketing website

Business Impact Analysis (BIA)

Performed: Annually and upon significant changes
Last assessment: [Date]

Impact Categories:
• Financial: Revenue loss, contractual penalties
• Operational: Inability to deliver services
• Reputational: Customer trust, brand damage
• Regulatory: Non-compliance, fines
• Contractual: SLA breaches

Acceptable Outage Windows:
• Tier 1: Maximum of 4 hours before significant impact
• Tier 2: Maximum of 8 hours
• Tier 3: Maximum of 24 hours

Dependencies Identified:
• Microsoft Azure: Primary dependency
• Internet connectivity: Multiple ISPs
• Personnel: Key person dependencies mitigated
• Third-party services: Assessed and documented

Disaster Recovery (DR) Strategy

Recovery Objectives

RTO (Recovery Time Objective): 4 hours
• Definition: Maximum acceptable time to restore services
• Scope: Tier 1 (mission-critical) services
• Measurement: From incident start to full functionality (a measurement sketch follows after these objectives)
• Target: 4 hours for 99% of scenarios

RPO (Recovery Point Objective): 5 minutes
• Definition: Maximum acceptable data loss
• Scope: All customer data
• Measurement: Point in time of the last successful backup
• Implementation: Continuous database replication
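A minimal measurement sketch, assuming incident timestamps are recorded: it computes the achieved RTO and RPO for an incident and compares them to the targets above; all timestamps are illustrative.

```python
# Minimal sketch of RTO/RPO measurement; timestamps are illustrative examples.
from datetime import datetime, timedelta

RTO_TARGET = timedelta(hours=4)     # Tier 1 recovery time objective
RPO_TARGET = timedelta(minutes=5)   # maximum acceptable data loss

def actual_rto(incident_start: datetime, service_restored: datetime) -> timedelta:
    """Elapsed time from incident start to full functionality."""
    return service_restored - incident_start

def actual_rpo(incident_start: datetime, last_restore_point: datetime) -> timedelta:
    """Data-loss window: time between the last restorable point and the incident."""
    return incident_start - last_restore_point

start = datetime(2024, 12, 1, 10, 0)
restored = datetime(2024, 12, 1, 13, 20)
last_sync = datetime(2024, 12, 1, 9, 58)
print("RTO met:", actual_rto(start, restored) <= RTO_TARGET)   # True (3h20m <= 4h)
print("RPO met:", actual_rpo(start, last_sync) <= RPO_TARGET)  # True (2m <= 5m)
```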

DR Architecture

Primary Site
• Region: Azure West-Europe (Amsterdam, NL)
• Availability Zones: 3 zones for redundancy
• Configuration: Active multi-zone deployment

DR Site
• Region: Azure North-Europe (Ireland)
• Status: Warm standby (pre-provisioned, not active)
• Data replication: Real-time for databases
• Configuration sync: Automated via IaC

Failover Approach
• Databases: Automated failover groups (within the EU)
• Applications: Manual failover after assessment
• DNS: Automated failover via Azure Traffic Manager
• Monitoring: Continuous, including the DR site (an illustrative health-probe sketch follows below)
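The automated health checks that DNS-based failover relies on can be pictured with the minimal sketch below, which exposes a health endpoint a traffic-manager-style probe could poll; the /health path, port 8080, and the placeholder dependency check are illustrative assumptions, not the actual probe configuration.

```python
# Minimal health-probe sketch; path, port, and the dependency check are
# illustrative assumptions, not the real probe setup.
from http.server import BaseHTTPRequestHandler, HTTPServer

def dependencies_healthy() -> bool:
    """Placeholder for real checks (database connectivity, queue depth, etc.)."""
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            ok = dependencies_healthy()
            # A 503 lets the probe mark this endpoint unhealthy and steer DNS away.
            self.send_response(200 if ok else 503)
            self.end_headers()
            self.wfile.write(b"OK" if ok else b"UNHEALTHY")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```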

DR Scenarios

Scenario 1: Availability Zone Failure
• Likelihood: Low (but prepared for)
• Detection: Automated health checks
• Response: Automatic failover to the remaining zones
• Impact: None or minimal (seconds)
• RTO: <5 minutes
• RPO: 0 (real-time replication)

Scenario 2: Regional Failure
• Likelihood: Very low (but prepared for)
• Detection: Monitoring, Azure status, alerts
• Response: Manual failover to the DR region after assessment
• Impact: Moderate (service interruption)
• RTO: 4 hours (includes assessment and coordination)
• RPO: 5 minutes

Scenario 3: Data Corruption/Deletion
• Likelihood: Low (access controls, backups)
• Detection: Monitoring, user reports, data validation
• Response: Point-in-time restore from backups
• Impact: Varies with the affected scope
• RTO: 1-4 hours (depends on data volume)
• RPO: 5 minutes

Scenario 4: Ransomware/Malware
• Likelihood: Low (prevention controls)
• Detection: Antivirus, anomaly detection, monitoring
• Response: Isolate, eradicate, restore from clean backups
• Impact: Potentially high (if widespread)
• RTO: 4-8 hours (includes forensics, verification)
• RPO: 5 minutes (immutable backups)

Scenario 5: Major Azure Global Outage
• Likelihood: Very rare (but impossible to rule out)
• Detection: Azure status dashboard, Microsoft communication
• Response: Escalate to Microsoft, wait for resolution
• Impact: High (unable to fail over within Azure)
• RTO: Depends on Microsoft's resolution
• Contingency: Communication plan, alternative ways of working

Backup Strategy

Database Backups
• Type: Automated Azure SQL Database backups
• Frequency: Continuous (point-in-time restore every 5 minutes; see the freshness-check sketch after this list)
• Retention:
  • Short-term: 35 days of point-in-time restore
  • Long-term: Weekly backups retained for 10 years
• Storage: Geo-redundant (replicated within the EU)
• Encryption: AES-256 for all backups
• Testing: Monthly restore tests
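A minimal sketch of how the 5-minute recovery point can be watched, assuming the newest restorable timestamp is available from backup or replication metadata; the placeholder function stands in for that metadata lookup and is not the actual Azure SQL API.

```python
# Minimal RPO freshness check; latest_restore_point() is a placeholder for the
# real backup/replication metadata, not an actual Azure SQL call.
from datetime import datetime, timedelta, timezone

RPO = timedelta(minutes=5)

def latest_restore_point() -> datetime:
    """Placeholder: would normally query backup/replication metadata."""
    return datetime.now(timezone.utc) - timedelta(minutes=3)

def rpo_at_risk() -> bool:
    """True if the newest restorable point is older than the 5-minute RPO."""
    return datetime.now(timezone.utc) - latest_restore_point() > RPO

print("RPO at risk:", rpo_at_risk())  # an alerting hook would act on this
```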

Application & Configuration
• Type: Infrastructure as Code (Terraform)
• Frequency: Upon every change (version controlled)
• Storage: Git repository with a branching strategy
• Encryption: Repository level encryption
• Testing: Automated deployment tests in staging

File Storage (Blob Storage)
• Type: Geo-redundant storage (GRS)
• Frequency: Real-time replication
• Versioning: Enabled (30 days of version history)
• Soft delete: 30-day retention
• Immutable storage: For compliance-critical data

Logs & Audit Data
• Type: Azure Log Analytics long-term retention
• Frequency: Continuous ingestion
• Retention: 1 year online, 7 years archived
• Storage: Immutable, encrypted
• Testing: Quarterly restore verification

Backup Testing
• Frequency: Monthly automated restore tests
• Scope: Sample databases, recent backups
• Validation: Data integrity checks, completeness (an illustrative integrity check follows below)
• Full DR test: Semi-annual (see DR Testing)
• Documentation: Test results logged and reviewed
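The monthly integrity validation could, for example, compare a restored table against its source as in the sketch below; sqlite3 stands in for the real databases, and the table and column names are illustrative assumptions.

```python
# Minimal restore-integrity sketch; sqlite3 and the table schema are illustrative.
import hashlib
import sqlite3

def table_fingerprint(conn: sqlite3.Connection, table: str) -> tuple[int, str]:
    """Return (row count, order-independent checksum) for a table."""
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()
    digest = hashlib.sha256()
    for row in sorted(map(repr, rows)):      # sort so row order does not matter
        digest.update(row.encode())
    return len(rows), digest.hexdigest()

def restore_matches(source: sqlite3.Connection, restored: sqlite3.Connection, table: str) -> bool:
    """A restore passes if count and checksum match the source."""
    return table_fingerprint(source, table) == table_fingerprint(restored, table)

src, rst = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (src, rst):
    db.execute("CREATE TABLE policies (id INTEGER, premium REAL)")
    db.executemany("INSERT INTO policies VALUES (?, ?)", [(1, 120.0), (2, 95.5)])
print("Restore OK:", restore_matches(src, rst, "policies"))  # True
```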

DR Testing

Test Frequency & Types

Monthly: Backup Restore Tests
• Scope: Sample backup restore
• Environment: Isolated test environment
• Validation: Data integrity, completeness
• Duration: 2-4 hours
• Pass criteria: Successful restore without errors

Quarterly: Failover Tests
• Scope: Database failover to the DR region
• Environment: Non-production replica
• Validation: Failover success, data consistency
• Duration: 4 hours
• Pass criteria: RTO/RPO within targets

Semi-Annual: Full DR Simulation
• Scope: Complete platform failover, all tiers
• Environment: Controlled test with selected customers
• Teams: All IR/DR teams, management
• Duration: 8 hours (full-day exercise)
• Validation:
  • All services restored
  • Data integrity confirmed
  • Communication protocols followed
  • RTO/RPO targets met
• Documentation: Comprehensive report
• Improvements: Action items addressed

Annual: Disaster Scenario Exercise
• Scope: Realistic, complex disaster (tabletop + technical)
• Participants: Full company, key stakeholders
• Scenarios: Varied (ransomware, regional failure, etc.)
• Duration: Full day
• Outcomes:

  • Team preparedness assessed

  • Gaps identified

  • Plan improvements

  • Training needs

Test Documentation
• Test plan: Objectives, scope, success criteria
• Test execution: Step-by-step log
• Results: Pass/fail per component
• Issues: Problems encountered and resolutions
• Improvements: Recommendations for plan updates
• Sign-off: Management approval

Personnel Continuity

Key Person Dependencies
• Identified: All critical roles mapped
• Mitigation: Cross-training, documentation
• Backup: Secondary person per critical role
• Knowledge transfer: Regular rotation

On-Call Rotation
• Coverage: 24/7 for critical systems
• Rotation: Weekly rotation among engineers
• Escalation: Clear escalation paths
• Compensation: On-call allowance + overtime

Remote Work Capability
• Infrastructure: Cloud-based, accessible globally
• VPN: Available for secure remote access
• Equipment: Laptops provided, BYOD policy
• Testing: Regular remote work exercises
• Policy: Work-from-home as default for DR

Succession Planning
• Key roles: Backups identified for management roles
• Documentation: Role responsibilities documented
• Transitions: Smooth handover procedures
• Authority: Delegated decision-making in DR

Supplier & Third-Party Continuity

Critical Suppliers

Microsoft Azure
• Risk: Primary infrastructure provider
• SLA: 99.99% with financial penalties
• DR: Multi-region, availability zones
• Monitoring: Azure status dashboard
• Escalation: Premier support contract
• Alternative: Not feasible (a full migration would take months)
• Mitigation: Multi-zone deployment, DR region

Bonsai Software (Development Partner)
• Risk: Development capacity dependency
• SLA: Best-effort support
• DR: Not critical (internal tooling)
• Mitigation: Knowledge documentation, code ownership
• Alternative: Internal team can maintain

Supplier Assessment
• Frequency: Annual supplier risk assessments
• Criteria: Financial stability, BC plans, security
• Documentation: Supplier BC plans on file
• Contracts: BC requirements in agreements
• Monitoring: Ongoing performance tracking

Communication Plan

Stakeholder Communication

Internal Team
• Channel: Microsoft Teams dedicated channel
• Frequency: Real-time updates during DR event
• Content: Status, actions, next steps
• Responsibility: DR Coordinator

Management/Executive
• Channel: Direct call, email, Teams
• Frequency: Every hour (P1), every 4 hours (P2)
• Content: Impact, progress, decisions needed
• Responsibility: CTO / DR Manager

Customers
• Channel: Email, status page, in-app notifications
• Frequency: Initial notification, then hourly updates
• Content:
  • What is happening
  • Which services are affected
  • Expected time to recovery
  • Alternatives (if available)
  • Timing of the next update
• Responsibility: Customer Success + DR Manager

Regulatory Authorities
• Trigger: Significant incidents with compliance impact
• Timeline: Per regulatory requirements
  • GDPR breach: Within 72 hours
  • DORA: Per incident classification
• Content: Incident details, impact, remediation
• Responsibility: DPO / Compliance Officer

Media/Public (if needed)
• Trigger: High-profile incidents, media inquiries
• Approval: Executive team
• Spokesperson: Designated representative
• Message: Consistent with customer communications

Status Page
• URL: www.onesurance.ai/status
• Updates: Real-time status, incident history
• Subscriptions: Email/SMS notifications available
• Transparency: Public visibility of uptime

Templates
• Pre-approved templates for common scenarios
• Tone: Professional, transparent, empathetic
• Languages: Dutch and English
• Review: Legal approval for critical communications

Compliance & Regulatory

Regulatory Requirements

DORA (Digital Operational Resilience Act)
• Applies to: Insurance sector clients
• Requirements:
  • ICT risk management framework
  • Incident reporting (24 hours for major incidents)
  • Resilience testing (annual)
  • Third-party risk monitoring
• Compliance: BC/DR plan aligned with DORA
• Testing: Annual DORA-specific resilience test
• Reporting: Incident classification and notification

GDPR (AVG)
• Requirement: Data availability and integrity
• Backups: Encrypted, tested, retention-compliant
• Data breach: Notification procedures (see Template 06)
• Impact assessments: DPIA for BC/DR processes

Contractual Obligations
• SLA commitments: 99.9% uptime
• Data retention: Per contract specifics
• Notification: Customer notification in the event of incidents
• Testing: Right to witness DR tests

Documentation & Records

BC/DR Documentation
• Business Continuity Plan: This document
• Disaster Recovery Procedures: Technical runbooks
• Test results: All tests logged
• Incident reports: Post-incident analyses
• Risk assessments: Annual BIA
• Supplier assessments: Third-party evaluations

Record Retention
• Plans: Current version + 3 years history
• Tests: 7 years
• Incidents: 7 years
• Assessments: 7 years
• Availability: For auditors, regulators on request

Plan Maintenance

Update Triggers
• Annually: Scheduled comprehensive review
• Significant changes: New services, infrastructure, personnel
• Post-incident: After major incidents
• Post-test: After DR tests with identified gaps
• Regulatory changes: New compliance requirements
• Supplier changes: New/changed dependencies

Review Process

1. Draft updates (DR Manager)
2. Technical review (Engineering team)
3. Compliance review (DPO/Legal)
4. Management approval
5. Communication to stakeholders
6. Training on changes
7. Version control and distribution

Governance
• Owner: CTO
• Approver: Executive team
• Review cycle: Annual
• Next review: [Date]

Continuous Improvement

Performance Metrics
• Actual RTO/RPO: Measured during incidents and tests
• Test success rate: % of DR tests passed
• Backup success rate: % of successful backups (computed as in the sketch below this list)
• Restore success rate: % of successful restores
• Incident frequency: Number and severity
• Compliance: % compliant with requirements
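A minimal sketch of how these rate metrics reduce to simple arithmetic; the sample counts are illustrative, not real Onesurance figures.

```python
# Minimal sketch of the success-rate metrics; counts are illustrative only.
def success_rate(successes: int, total: int) -> float:
    """Percentage of successful runs; 0 when nothing was attempted."""
    return 100.0 * successes / total if total else 0.0

backup_runs, backup_ok = 240, 239        # e.g. one failed backup job in the period
dr_tests, dr_tests_passed = 4, 4         # e.g. quarterly failover tests in a year
print(f"Backup success rate: {success_rate(backup_ok, backup_runs):.1f}%")      # 99.6%
print(f"DR test success rate: {success_rate(dr_tests_passed, dr_tests):.0f}%")  # 100%
```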

Reporting
• Monthly: Backup and uptime reports
• Quarterly: BC/DR metrics dashboard
• Semi-annual: Full BC/DR program review
• Annual: Executive report for the Board

Improvement Sources
• Post-incident reviews: Lessons learned
• Test results: Gaps identified
• Industry best practices: Benchmarking
• Regulatory feedback: Auditor findings
• Technology advances: New capabilities

Customer Responsibilities

Shared Responsibility Model
• Onesurance provides: Platform availability, data backups
• Customer responsible for:
  • User access management
  • Data accuracy and quality
  • Exporting and archiving their own reports (if needed)
  • Business continuity for their own operations
  • Alternative workflows during outages

Customer Actions During Outages
• Monitor status page: www.onesurance.ai/status
• Follow updates: Email notifications
• Use workarounds: If available
• Report issues: Via support channels
• Avoid: Excessive retries (these can delay recovery; see the backoff sketch below this list)
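Rather than tight retry loops, clients can back off exponentially with jitter, as in the hedged sketch below; the endpoint URL and retry limits are illustrative assumptions.

```python
# Minimal retry-with-backoff sketch; the URL and limits are illustrative.
import random
import time
import urllib.error
import urllib.request

def call_with_backoff(url: str, max_attempts: int = 5) -> bytes:
    """Retry with exponential backoff and jitter instead of hammering a recovering service."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == max_attempts - 1:
                raise
            delay = min(60, 2 ** attempt) + random.uniform(0, 1)  # 1s, 2s, 4s, ... capped
            time.sleep(delay)
    raise RuntimeError("unreachable")

# Example: data = call_with_backoff("https://api.example.com/reports")
```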

Customer BC Planning
• Documentation: Maintain own BC plans
• Dependencies: Include Onesurance in risk assessment
• Testing: Include Onesurance scenario's
• Contacts: Keep contact lists current
• Escalation: Know how to reach us in emergencies

Contact

For business continuity and disaster recovery matters:
• 24/7 Incident Hotline: dpo@onesurance.ai
• BC/DR Planning: dpo@onesurance.ai
• SLA Questions: dpo@onesurance.ai

Last updated: December 2024
Onesurance B.V. | Breda, The Netherlands | KvK: 87521997