
When Trust Isn’t Enough: How CTOs Can Build Cloud Outage Resilience

On November 18, 2025, Cloudflare experienced what it called its worst outage since 2019. For over three hours, HTTP 5xx errors cascaded across their global network, bringing down websites, APIs, and critical services worldwide. The culprit? A database permission change that doubled the size of a configuration file, exceeding hardcoded limits and triggering system-wide panics.

Just weeks earlier, AWS’s us-east-1 region, the workhorse of the cloud industry, suffered a cascading DNS resolution failure in DynamoDB that rippled through EC2, Lambda, CloudWatch, and SQS. Applications that had run flawlessly for years suddenly couldn’t resolve internal service endpoints. Traffic ground to a halt. Engineers scrambled to understand why their highly available architectures were returning errors.

Azure has faced its own share of control plane failures, where the very systems used to manage and orchestrate cloud resources became unavailable, leaving customers unable to deploy fixes or even assess the scope of their problems.

These aren’t isolated incidents from second-tier providers. These are the industry’s giants, companies with virtually unlimited resources, world-class engineering talent, and years of operational experience. If they can experience catastrophic failures, what does that mean for the rest of us?

The uncomfortable truth is that too many engineering teams have built their infrastructure on a foundation of trust rather than design. 

  • They trust that AWS will stay up. 
  • They trust that their cloud provider’s SLA means their services will be available. 
  • They trust that “multi-AZ deployment” equals resilience.

But trust isn’t a strategy. Resilience isn’t something you inherit from your cloud provider; it’s something you architect, test, and continuously validate. The question every CTO and engineering leader must answer isn’t whether your cloud provider will fail, but what happens to your business when it does.

Anatomy of a modern outage: What goes wrong?

To build truly resilient systems, we first need to understand how modern cloud infrastructure actually fails. The failure patterns are remarkably consistent across providers and incidents.

Single points of failure in control planes or DNS layers

Single points of failure in control planes or DNS layers remain the most dangerous weakness in cloud architecture. In Cloudflare’s case, it was the Bot Management module in their core proxy, a component that every request traversed. When it panicked due to an oversized configuration file, there was no fallback path. The entire system failed. Similarly, AWS’s us-east-1 outage originated in DNS resolution for DynamoDB. DNS is so fundamental to cloud operations that when it fails, the cascading effects are immediate and severe.

Overreliance on popular regions

Overreliance on popular regions creates massive system-wide impact scenarios. AWS us-east-1 has become notorious not just for its outages but for how many organizations have concentrated their entire infrastructure there. It’s where AWS launches new features first, the default region in documentation, and often where prices are most competitive. The result is that when us-east-1 experiences problems, a disproportionate share of the internet experiences problems too. The same pattern exists with Azure’s primary regions and other providers’ flagship data centers.

Regional interdependencies

Regional interdependencies mean that failures cascade faster than teams can respond. Modern cloud architectures aren’t collections of independent services; they’re deeply interconnected systems where failure propagates rapidly. The Cloudflare incident affected Workers KV, which impacted Access, which prevented dashboard logins because Turnstile couldn’t load. Each service depended on the one before it, creating a chain reaction that expanded the outage far beyond its initial scope.

Inadequate DR planning or untested failover flows

Inadequate DR planning or untested failover flows plague even sophisticated organizations. Many companies have disaster recovery plans that look impressive in documents but have never been tested under realistic conditions. When the AWS outage hit, teams discovered that their failover procedures depended on control plane APIs that were themselves unavailable. Their carefully documented runbooks were useless because the systems they relied on to execute those runbooks were down.

The myth of “high availability” as default in cloud-native setups

The myth of “high availability” as default in cloud-native setups is perhaps the most dangerous assumption. Deploying across multiple availability zones provides resilience against certain types of failures, such as data center power loss or network partitions within a region, but does nothing to protect against region-wide issues, control plane failures, or service-level bugs. Multi-AZ deployment is a starting point, not a destination.

The Cloudflare post-mortem revealed another critical failure mode: their feature file generation system had assumptions baked into it that seemed reasonable when the code was written. The Bot Management system expected no more than 200 features, with current usage around 60. That seemed like plenty of headroom, until a database query started returning duplicate entries, doubling the feature count overnight. The code hit its hardcoded limit, panicked, and brought down the global network.
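The lesson generalizes: a generated configuration file should be validated before it replaces the one in service, and an oversized or malformed candidate should leave the last-known-good version running rather than crash the process. The sketch below illustrates that fail-safe pattern in Python; the names, the 200-feature limit, and the deduplication step are modeled on the incident description, not on Cloudflare’s actual code.

```python
# Hypothetical sketch: validate a freshly generated feature list, but fall
# back to the last-known-good version instead of panicking when it fails.
MAX_FEATURES = 200  # hardcoded capacity limit, as in the incident report

def load_features(candidate, last_known_good):
    """Return a validated feature list, preferring the candidate."""
    # Deduplicate first: the oversized file doubled via duplicate rows.
    deduped = list(dict.fromkeys(candidate))
    if len(deduped) > MAX_FEATURES:
        # Fail safe, not fail hard: keep serving with the old config.
        return last_known_good
    return deduped

good = [f"feature_{i}" for i in range(60)]
bad = [f"feature_{i}" for i in range(250)]       # exceeds the limit

assert load_features(bad, good) == good          # oversized file rejected
assert load_features(good * 2, good) == good     # duplicates collapse to 60
```

The design choice worth noting is that validation failure is a reason to stop propagating a new configuration, never a reason to stop serving traffic.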

These patterns (hidden assumptions, cascading failures, centralized failure domains, and untested recovery procedures) are endemic in modern cloud architectures. They exist not because engineers are careless, but because building truly resilient systems is extraordinarily difficult and requires fighting against natural incentives toward consolidation, efficiency, and cost optimization.

Key principles of cloud infrastructure resilience

Building resilient infrastructure requires moving beyond trust-based thinking to design-based thinking. Here are the core principles that separate systems that survive outages from those that don’t.

1. Design for regional independence

The AWS us-east-1 outage taught a harsh lesson: no single region, no matter how mature or reliable, should be trusted with your entire production workload. Regional independence isn’t about having backup regions; it’s about having regions that can function completely autonomously when everything else fails.

  • Avoid production reliance on us-east-1 or a single cloud region: This is more than just spreading your infrastructure around. It means designing your architecture so that each region can operate without any dependencies on other regions or centralized services. When engineers at resilient organizations design multi-region systems, they ask: “If every other region and every control plane service disappeared right now, could this region keep serving traffic?” If the answer is anything less than an unqualified yes, the design isn’t truly independent.
  • Use cross-region replication for databases, services, and queues: Data is often the stickiest part of regional independence. Synchronous replication provides strong consistency but adds latency and creates a failure domain spanning multiple regions. If one region can’t acknowledge writes, the other can’t proceed. Asynchronous replication is faster but risks data loss during failover. The key is making an explicit, informed choice based on your actual requirements rather than accepting defaults. Many organizations discovered during recent outages that their replication wasn’t actually working the way they thought it was, or that their automated failover would result in unacceptable data loss.
  • Apply “blast radius” thinking: isolate services so failure in one doesn’t take down others: The Cloudflare outage demonstrated how a single component failure can cascade through an entire system. True resilience requires bulkheads: architectural boundaries that contain failures. This might mean separate proxy layers for different service tiers, isolated authentication systems, or dedicated infrastructure for critical paths. The goal is ensuring that when something fails (and something always will), the failure affects the smallest possible surface area.
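Blast radius can be reasoned about concretely: if you declare each service’s dependencies explicitly, you can compute which services a single failure can reach. The sketch below uses hypothetical service names; a smaller reachable set means better isolation.

```python
# Illustrative sketch (hypothetical service names): model blast radius by
# declaring explicit dependencies, then computing which services a single
# failure can reach transitively.
DEPENDS_ON = {
    "checkout": {"auth", "payments"},
    "payments": {"auth"},
    "catalog": set(),   # isolated: shares no dependencies
    "auth": set(),
}

def blast_radius(failed):
    """Return every service that transitively depends on the failed one."""
    impacted = {failed}
    changed = True
    while changed:  # iterate to a fixed point over the dependency graph
        changed = False
        for svc, deps in DEPENDS_ON.items():
            if svc not in impacted and deps & impacted:
                impacted.add(svc)
                changed = True
    return impacted

assert blast_radius("auth") == {"auth", "payments", "checkout"}
assert blast_radius("catalog") == {"catalog"}  # failure stays contained
```

Running this kind of analysis over a real service graph quickly reveals which components are de facto single points of failure, like Cloudflare’s core proxy module.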

2. Build in failover automation

Manual failover is too slow for modern outages. By the time your on-call engineer wakes up, joins the incident bridge, assesses the situation, and executes failover procedures, you’ve lost customers and revenue. Automated failover isn’t a luxury; it’s a requirement for maintaining availability during cloud disruptions.

  • Use Route 53 health checks with latency-based failover: DNS-based failover, when properly configured, can detect failures and reroute traffic in seconds. Route 53’s health checks can monitor not just endpoint availability but application-level health, testing specific URLs, checking response codes, and validating response content. Combined with latency-based routing, this creates a system that automatically directs users to the fastest, healthiest region without manual intervention. The key is setting appropriate health check thresholds. Too sensitive, and you’ll failover unnecessarily during transient issues. Too lenient, and you’ll keep sending traffic to degraded regions. Most successful implementations use multiple tiers of checks: fast checks that detect obvious failures within seconds, and slower, more comprehensive checks that validate full application functionality.
  • Blue/green and canary deployments across regions: These provide controlled rollout mechanisms that limit blast radius while maintaining the ability to roll back quickly. Blue/green deployments maintain two complete environments, one serving traffic, one standing by. When you deploy, you route traffic to the standby environment, validate it’s working correctly, then decommission the old environment. If something goes wrong, you simply route traffic back. Canary deployments are more gradual: route a small percentage of traffic to the new version, monitor key metrics, and progressively increase traffic if everything looks healthy. Automated rollback triggers, based on error rates, latency percentiles, or custom business metrics, can abort deployments before they impact significant user populations.
  • Automate rollbacks to last-known-good versions in under five minutes: The Cloudflare outage took hours to resolve partly because identifying and fixing the root cause took time. But they could have reduced customer impact dramatically by implementing faster rollback to a known-good configuration. Automated rollback requires maintaining versioned artifacts (container images, configuration files, infrastructure state) and having tested procedures to restore previous versions quickly. Five minutes might seem arbitrary, but it’s based on the reality that most major outages have their greatest customer impact in the first 10-15 minutes. If you can detect and automatically roll back within five minutes, you’ve contained the blast radius before it becomes catastrophic.
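The rollback decision itself can be reduced to a small, testable gate: watch error rates for the first few minutes after a deploy, and revert to the last-known-good version the moment a threshold is crossed. The sketch below is a simplified model of that gate; the version labels, thresholds, and sample format are illustrative, not a real pipeline’s API.

```python
# Hedged sketch of an automated rollback gate. Thresholds and version
# identifiers are hypothetical.
ROLLBACK_ERROR_RATE = 0.05   # abort if more than 5% of requests fail
ROLLBACK_WINDOW_S = 300      # decide within five minutes of the deploy

def should_roll_back(samples):
    """samples: list of (elapsed_seconds, error_rate) observations."""
    for elapsed, error_rate in samples:
        if elapsed <= ROLLBACK_WINDOW_S and error_rate > ROLLBACK_ERROR_RATE:
            return True
    return False

def serving_version(candidate, last_known_good, samples):
    """Return the version that should be serving traffic."""
    return last_known_good if should_roll_back(samples) else candidate

# An error spike at t=90s trips the gate; traffic returns to v1.
assert serving_version("v2", "v1", [(30, 0.01), (90, 0.12)]) == "v1"
assert serving_version("v2", "v1", [(30, 0.01), (90, 0.02)]) == "v2"
```

In a real system the samples would come from your metrics pipeline and the version switch would repoint a load balancer or DNS record; the point is that the decision logic is simple enough to run without a human in the loop.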

Regional independence also means thinking carefully about control plane dependencies. If your deployment system, monitoring system, and incident response tools all live in the same region as your production workload, you can’t deploy fixes when that region fails. Multi-region tooling isn’t just for serving user traffic; it’s for operating your infrastructure under the worst possible conditions.

3. Disaster recovery isn’t optional

Disaster recovery is about validating that your architecture actually works when core dependencies fail.

  • Quarterly DR drills using fault injection or chaos engineering: This is the only way to know if your resilience measures actually work. Netflix pioneered this approach with Chaos Monkey, which randomly terminates instances in production. The principle is simple: if you never test your ability to survive failures, you don’t actually know if you can survive failures. Modern chaos engineering goes beyond terminating instances. Inject DNS failures to simulate the AWS us-east-1 outage. Artificially increase configuration file sizes to simulate the Cloudflare scenario. Disable cross-region replication to test what happens when your data stores diverge. Throttle API calls to control planes to simulate service degradation. The key is running these tests in production, or at least in production-like environments with realistic traffic patterns. Synthetic test environments often hide problems that only emerge under real-world conditions. Start small, perhaps terminating a few instances during low-traffic periods, and progressively increase the severity of your chaos experiments as you gain confidence.
  • Infrastructure as Code (IaC) enables repeatable DR environments: One often-overlooked aspect of disaster recovery is the ability to rebuild your entire infrastructure from scratch if necessary. If your production environment was built through years of manual changes, console clicks, and undocumented tweaks, recovering from a catastrophic failure becomes nearly impossible. IaC tools like Terraform, CloudFormation, or Pulumi define your entire infrastructure in code. This means you can version it, test it, and most importantly, recreate it in a different region or even a different cloud provider if necessary. When AWS us-east-1 fails, teams with comprehensive IaC can spin up equivalent infrastructure in us-west-2 or eu-west-1 in minutes rather than days.
  • Validate recovery objectives (RTO/RPO) through real testing: Recovery Time Objective (RTO, how quickly you can restore service) and Recovery Point Objective (RPO, how much data you can afford to lose) are meaningless unless you’ve actually measured them. Too many organizations have theoretical RTOs of “under an hour” that turn into “two days” during actual incidents because they discover unexpected dependencies, missing documentation, or broken automation. Real testing means simulating complete region failures and measuring actual recovery time. It means deliberately causing data loss scenarios and measuring how much data actually disappears. It’s uncomfortable and sometimes expensive, but it’s the only way to validate that your DR strategy actually works.
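Measuring RTO during a drill is mechanically simple: time the recovery steps end to end and compare against the target. The sketch below simulates this in Python; the step names in the comments and the sleep-based stand-ins are placeholders for real actions like promoting a replica or repointing DNS.

```python
# Sketch: measure RTO during a drill instead of assuming it. The recovery
# steps here are simulated stand-ins, not real infrastructure calls.
import time

def run_failover_drill(steps, rto_target_s):
    """Execute recovery steps in order; measure wall-clock recovery time."""
    start = time.monotonic()
    for step in steps:
        step()  # e.g. promote replica, repoint DNS, warm caches
    measured_rto = time.monotonic() - start
    return measured_rto, measured_rto <= rto_target_s

# Three simulated recovery steps standing in for the real procedure.
steps = [lambda: time.sleep(0.01)] * 3
rto, within_target = run_failover_drill(steps, rto_target_s=1.0)

assert within_target  # a drill that misses its RTO is a production incident
```

The measured number, not the documented one, is what belongs in your DR report; RPO can be validated the same way by diffing replicated data after a simulated failover.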

4. Observability is your early warning system

You can’t fix what you can’t see. During the Cloudflare outage, engineers initially suspected they were under a massive DDoS attack because the symptoms were unusual: intermittent failures that would recover briefly before failing again. Better observability might have led them to the root cause faster.

  • Real-time monitoring to catch symptoms before failures escalate: This requires instrumentation that goes deeper than basic uptime checks. Monitor error rates, latency percentiles (especially p95 and p99), throughput, and resource utilization. Set up derived metrics that indicate degradation before complete failure: rising queue depths, increasing retry rates, elevated timeout counts. The Cloudflare incident showed an interesting pattern in their 5xx error rates: not a steady failure state, but oscillation between working and broken as good and bad configuration files alternated. That pattern was a signal that something was wrong with the generation or propagation mechanism. With the right dashboards and alerts, that could have been detected earlier.
  • Distributed tracing and logging to reduce MTTR (Mean Time To Resolution): This is essential for understanding cascading failures. When the AWS us-east-1 DynamoDB DNS failure cascaded into EC2, Lambda, CloudWatch, and SQS issues, teams needed to understand the dependency chain. Distributed tracing that spans service boundaries, regions, and even providers can map out these relationships in real time. Structured logging with consistent correlation IDs allows you to follow individual requests through your entire system. When something fails, you can trace back through the logs to see exactly which component in which region started returning errors first. This dramatically reduces the time spent diagnosing root causes during incidents.
  • Alert fatigue versus signal: Tuning SLOs to what really matters is one of the hardest problems in observability. Too many alerts, and your on-call engineers ignore them. Too few, and critical issues go undetected. The solution is careful tuning based on Service Level Objectives (SLOs) that reflect actual user experience. Instead of alerting on every instance of failure or temporary spike in errors, alert on SLO violations: sustained degradation that crosses thresholds you’ve defined as unacceptable. Use error budgets to distinguish between normal operational noise and genuine problems that require response. A single 5xx error isn’t an incident. A 1% error rate sustained over five minutes might be.

Why resilience needs to be a DevOps responsibility

Infrastructure resilience isn’t just an architecture problem; it’s an operational problem that requires continuous attention, automation, and cross-functional collaboration. This is fundamentally a DevOps challenge.

  • Resilience isn’t just infrastructure, it’s CI/CD, IaC, testing, and process: The Cloudflare outage was triggered by a configuration change propagated through their deployment pipeline. The problem wasn’t just that the configuration was wrong; it was that their pipeline didn’t validate it adequately before global distribution. Resilience requires treating configuration as code, with the same testing, validation, and progressive rollout you’d apply to application code. Your CI/CD pipeline is itself a resilience concern. If it depends on the same infrastructure as your production environment, you can’t deploy fixes during outages. DevOps teams should ensure deployment pipelines are multi-region, with the ability to push changes from any healthy region to any other region.
  • DevOps can own and automate DR runbooks: Manual runbooks (step-by-step instructions for responding to failures) are better than nothing, but they’re slow, error-prone, and often out of date. DevOps automation can encode runbooks as scripts, playbooks, or workflows that execute consistently and quickly. When you’re under pressure during a major outage, automated runbooks eliminate human error and dramatically reduce response time.
  • Cross-team responsibility: Dev, Infra, and SRE must collaborate: Resilience failures often happen at the boundaries between teams. Developers might not understand the infrastructure failure modes they need to handle. Infrastructure teams might not know which application behaviors indicate health versus degradation. SRE teams might not have visibility into development roadmaps that introduce new dependencies. Breaking down these silos requires shared ownership of resilience: joint chaos engineering exercises where developers, infrastructure engineers, and SREs work together to inject failures and observe system behavior; shared on-call rotations where everyone experiences production issues firsthand; and collaborative post-mortems that focus on systemic improvements rather than individual blame.
  • Consider embedded DevOps teams focused on resilience readiness: Some organizations have found success with dedicated resilience teams: DevOps engineers whose primary job is improving system resilience through chaos engineering, DR testing, observability improvements, and architecture reviews. These teams don’t own production systems, but they provide expertise and tooling to help product teams build more resilient systems. This model works particularly well in larger organizations where individual product teams might lack deep expertise in multi-region architecture, failover automation, or chaos engineering. The embedded team provides consulting, conducts assessments, and builds shared tooling that all teams can leverage.
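Encoding a runbook as an automated workflow, as described above, can be as simple as an ordered list of steps where execution halts and escalates at the first failure. The step names in this sketch are hypothetical; in practice each would wrap a real operation like promoting a replica or updating a DNS record.

```python
# Sketch: a manual runbook encoded as an automated, ordered workflow.
# Step names are hypothetical; each action returns True on success so
# the workflow can halt (and page a human) at the first failure.
def run_runbook(steps):
    """Execute steps in order; report how far automation got."""
    completed = []
    for name, action in steps:
        if not action():
            return completed, name   # stop here, escalate to on-call
        completed.append(name)
    return completed, None           # fully automated recovery

steps = [
    ("verify-secondary-healthy", lambda: True),
    ("promote-replica", lambda: True),
    ("repoint-dns", lambda: False),  # simulate a step that fails
    ("notify-stakeholders", lambda: True),
]
done, failed_at = run_runbook(steps)

assert done == ["verify-secondary-healthy", "promote-replica"]
assert failed_at == "repoint-dns"
```

Even this minimal structure beats a wiki page: every step is versioned, every execution is consistent, and a partial failure tells responders exactly where the automation stopped.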

Common pitfalls in resilience strategy

Even organizations that invest in resilience often fall into predictable traps that undermine their efforts.

  • “We’re multi-AZ—that’s enough” (it’s not): Multi-availability-zone deployment protects against data center failures, network partitions within a region, and certain types of hardware failures. It does nothing to protect against region-wide outages, service-level bugs, or control plane failures. The AWS us-east-1 outage affected all availability zones simultaneously. Cloudflare’s issue impacted their global network regardless of which data center served requests. Multi-AZ is a foundation, not a complete resilience strategy.
  • No observability into DR success rates or test failures: Many organizations run quarterly DR tests and declare success if the environment comes up, without actually measuring whether the experience meets their stated objectives. Did the failover complete within your RTO? Was data loss within your RPO? Could users actually access the service during and after failover? Without measuring these outcomes, you don’t know if your DR strategy actually works. Worse, some teams run DR tests that consistently fail (restores don’t complete, failover takes hours instead of minutes, data corruption issues emerge) but don’t treat these failures as urgent problems. DR test failures should be treated as production incidents, because they indicate your system won’t survive a real disaster.
  • Dev teams unaware of fallback paths or circuit breakers: Application code needs to be resilience-aware. When a downstream service is unavailable, does your code fail fast or does it retry indefinitely, tying up threads and resources? When a database is slow, does your application implement backoff and circuit breaker patterns, or does it hammer the database harder and make the problem worse? The Cloudflare outage affected Workers KV, which many applications depend on. Applications that had proper fallback logic (serving cached data, degrading gracefully, or redirecting to alternative services) maintained partial functionality. Applications that assumed Workers KV would always be available simply failed.
  • Relying solely on your cloud provider’s SLA guarantees: Cloud providers offer SLAs that promise certain uptime percentages and provide credits when they fail to meet them. But SLA credits don’t compensate for lost revenue, damaged reputation, or customer churn. A 99.95% SLA means up to 4.38 hours of downtime per year is within acceptable parameters for your provider. Is it acceptable for your business? Moreover, SLA calculations often exclude outages attributed to factors outside the provider’s control. Regional failures sometimes fall outside SLA coverage. Service-specific issues might not trigger credits if the underlying compute remains available. Reading the fine print reveals that SLAs provide less protection than many organizations assume.
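The circuit breaker pattern mentioned above is worth seeing in miniature: after repeated failures the breaker "opens" and fails fast with a fallback instead of hammering a degraded dependency. This sketch illustrates the core idea only; production implementations (in libraries such as resilience4j or Polly) add timed half-open states that probe for recovery.

```python
# Minimal circuit-breaker sketch (illustrating the pattern from the text,
# not any specific library's API).
class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0

    def call(self, fn, fallback):
        if self.failures >= self.failure_threshold:
            return fallback()        # open: fail fast, degrade gracefully
        try:
            result = fn()
            self.failures = 0        # closed: a healthy call resets the count
            return result
        except Exception:
            self.failures += 1
            return fallback()

def flaky():
    raise TimeoutError("downstream unavailable")

breaker = CircuitBreaker(failure_threshold=3)
results = [breaker.call(flaky, fallback=lambda: "cached") for _ in range(5)]

assert results == ["cached"] * 5
assert breaker.failures == 3  # counting stops once the breaker is open
```

Every caller gets the cached fallback, but only the first three calls actually touch the failing dependency; the rest return immediately, which is exactly the behavior that kept some Workers KV consumers partially functional.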

How Naviteq helps teams reduce downtime risks

Building and maintaining resilient infrastructure requires specialized expertise, significant time investment, and continuous attention. For many organizations, particularly those without large dedicated DevOps teams, achieving true resilience feels overwhelming.

  • Naviteq’s DevOps as a Service model provides embedded expertise specifically focused on resilience and availability. Rather than just implementing features or maintaining infrastructure, Naviteq teams architect multi-region systems designed to survive provider outages from the ground up. Building multi-region architecture with automated failovers is one of Naviteq’s core competencies. This includes designing regionally independent deployments, implementing cross-region data replication strategies appropriate to each application’s requirements, and setting up automated health checking and DNS-based failover that responds to failures within seconds rather than minutes. The difference between theoretical multi-region architecture and systems that actually survive outages comes down to details like properly configured health checks, validated failover procedures and tested rollback mechanisms. Naviteq’s teams have navigated these challenges across multiple clients and can implement proven patterns rather than experimenting with unproven approaches.
  • Embedded expertise to run chaos testing and DR readiness assessments helps organizations validate their resilience continuously rather than discovering gaps during actual incidents. Naviteq conducts regular chaos engineering experiments, injecting failures under controlled conditions to verify that systems respond correctly. These aren’t checkbox exercises, they’re realistic simulations designed to uncover genuine weaknesses. DR readiness assessments go beyond reviewing documentation to actively testing failover procedures, measuring actual RTO and RPO, and identifying hidden dependencies that would prevent successful recovery. Organizations often discover through these assessments that their DR strategy has critical gaps like missing data backups, untested procedures, or dependencies on services they didn’t realize existed.
  • Proven tools like IaC, GitOps, Route 53, Prometheus, ArgoCD, Terraform form the foundation of Naviteq’s approach to resilient infrastructure. Infrastructure as Code through Terraform ensures environments are reproducible and can be rebuilt in different regions or providers. GitOps practices through ArgoCD provide declarative, versioned infrastructure configurations with audit trails and easy rollback. Prometheus and associated tooling deliver observability that survives partial outages and provides early warning of degradation. These aren’t just technology choices, they’re an integrated toolchain that enables the automation, observability, and rapid response required for true resilience. The Naviteq team has deep expertise in configuring and operating these tools at scale, avoiding common pitfalls and implementing best practices learned across numerous deployments.
  • Supporting high-traffic clients during past outages has given Naviteq real-world experience in what resilience looks like under pressure. When AWS regions have experienced issues, Naviteq-managed systems have maintained availability through automated failover. When unexpected load spikes have overwhelmed single regions, multi-region architecture has absorbed the traffic. When deployment issues have emerged, automated rollback has contained the blast radius. This operational experience informs every architecture decision, every automation script, every monitoring dashboard. Resilience isn’t theoretical, it’s built from understanding how systems actually fail and what actually works when everything goes wrong.

Final thoughts – you can’t prevent outages, but you can prevent downtime

The lesson from recent major cloud outages isn’t that cloud providers are unreliable; it’s that no single system, no matter how sophisticated, can guarantee perfect availability. AWS will experience outages. Azure will experience outages. Cloudflare will experience outages. The question isn’t whether these failures will happen, but whether your systems are designed to survive them.

  • Architecture, automation, and observability are your best defense. Multi-region architecture ensures you’re not dependent on any single region or availability zone. Automated failover enables rapid response without requiring human intervention during the chaotic early minutes of an incident. Comprehensive observability provides the visibility needed to understand what’s failing and why. These aren’t separate initiatives; they’re integrated elements of a resilient system. Great architecture without automation means manual failover that takes too long. Great automation without observability means blind execution that might make problems worse. Great observability without good architecture gives you perfect visibility into a system that’s fundamentally fragile.
  • Resilience requires intentional design, not assumptions. The Cloudflare outage was triggered by assumptions that configuration files would stay below a certain size. The AWS us-east-1 outage exploited assumptions about DNS reliability. Your infrastructure almost certainly contains similar assumptions: limits that seem generous now but might be exceeded tomorrow, dependencies that seem reliable until they’re not, and failover procedures that work in theory but haven’t been tested in practice. Intentional design means questioning those assumptions, testing failure modes, and continuously validating that your resilience measures actually work. It means treating reliability as a first-class requirement, not something you add later if you have time.
  • With the right DevOps strategy, outages become survivable events. The difference between companies that barely noticed the AWS us-east-1 outage and those that experienced major disruptions came down to preparation. Multi-region architecture meant they weren’t completely dependent on the failing region. Automated failover meant traffic rerouted before most users noticed problems. Well-tested DR procedures meant engineers knew exactly what to do. Outages will always be stressful, but they don’t have to be catastrophic. With proper preparation, a major provider outage becomes an operational incident rather than an existential threat.
  • Be proactive: test before the outage, not after. The time to discover that your DR procedures don’t work isn’t during an actual disaster. The time to find out that your automated failover has gaps isn’t when your primary region is down. The time to realize your observability has blind spots isn’t when you desperately need visibility into what’s failing. Regular chaos engineering, quarterly DR drills, continuous validation of your resilience measures: these practices feel expensive and time-consuming right up until you experience a major outage. Then they become the difference between brief degradation and extended downtime. Recent cloud disruptions have been wake-up calls for the entire industry. They’ve revealed that trust-based resilience (assuming your cloud provider will always be available) isn’t sufficient. The organizations that thrive despite these outages are those that have invested in architecture-based resilience: systems designed from the ground up to survive provider failures.

The choice is yours. You can continue trusting that outages won’t happen to you, or you can design systems that don’t depend on that trust. You can hope your failover procedures work, or you can test them regularly and know they work. You can treat resilience as someone else’s problem, or you can make it a core DevOps responsibility. Because the next major cloud outage is already out there, waiting to happen. The only question is whether, when it arrives, your systems will stay online.

Don’t wait for the next major cloud outage to discover gaps in your cloud outage resilience strategy. Naviteq’s DevOps experts can help you assess your current architecture, identify single points of failure, and implement proven multi-region patterns that have survived real-world outages.

Ready to discover the gaps in your cloud outage resilience?

Contact Naviteq today to stress-test your failover strategy and audit your cloud architecture. Our team can conduct comprehensive resilience assessments, implement automated failover systems, run chaos engineering experiments, and build the observability infrastructure you need to maintain availability even when cloud providers can’t. Whether you’re running on AWS, Azure, GCP, or multi-cloud environments, we have the expertise to help you build systems that survive outages.


2.2. How we process the data gathered

Naviteq and third-party providers we partner with (e.g., our advertising and analytics partners) use cookies and other tracking tools to identify users across different services and devices and ensure better user experience. Please see the list of them below.

2.2.1. Analytics partners

The services outlined below help us to monitor and analyze both web traffic and user behavior.

  • Google Analytics (Google LLC.) Google Analytics is a web analysis service provided by Google Inc. (Hereinafter in this document referred to as Google). Google utilizes the data collected to track and examine user behavior, to prepare reports, and share insights with other Google services. Google may use the data collected to contextualize and personalize the advertisements launched via Google’s advertising network. The service is subject to Google’s privacy policy. Google’s Privacy Policy
  • Google Tag Manager (Google LLC.) Google Tag Manager is a web service designed to optimize the Google Analytics management process. The service is provided by Google Inc. and is subject to the company’s privacy policy. Google’s Privacy Policy
  • Facebook Ads conversion tracking (Facebook, Inc.) Facebook Ads conversion tracking is an analytics service that binds data gathered from the Facebook advertising network with actions performed on Naviteq websites. The service is provided by Facebook, Inc. and is subject to the company’s privacy policy. Facebook’s Privacy Policy
  • Google AdWords Tools (Google AdWords Conversion Tracking/ Dynamic Remarketing / User List / DoubleClick) (Google LLC) Google AdWords conversion tracking and other Google Ads services are analytic instruments, that connect data from the Google AdWords advertising network with actions taken on Naviteq websites. The services are provided by Google Inc. and are subject to the company’s privacy policy. Google’s Privacy Policy
2.2.2. Advertising partners

User data may be employed to customize advertising deliverables, such as banners and any other types of advertisements to promote our services. Sometimes, these marketing deliverables are developed based on user preferences. However, not all personal data is used for this purpose. Some of the services provided by Naviteq may use cookies to identify users. The behavioral retargeting technique may also be used to display advertisements tailored to user preferences and online behavior, including outside Naviteq websites. For more information, please check the privacy policies of the relevant services.

  • Facebook Audience Network (Facebook, Inc.) Facebook Audience Network is an advertising service that helps to monitor and evaluate the efficiency of advertising campaigns launched via Facebook. The service is provided by Facebook, Inc. and is subject to the company’s privacy policy. Facebook’s Privacy Policy
  • Bing Ads (Microsoft Corporation). Bing Ads is advertising for launching and managing advertising campaigns across Bing search and Bing’s partner network. The service is provided by Microsoft Corporation and is subject to the company’s privacy policy. Microsoft Corporation’s Privacy Policy
  • Google AdWords (Google LLC) DoubleClick (Google Inc.) / DoubleClick Bid Manager / Google DoubleClick Google AdWords and Double Click are advertising services that enable efficient interaction with potential customers by suggesting relevant advertisements across Google Search, as well as Google’s partner networks. Google AdWords and Double Click are easily integrated with any other Google services—for example, Google Analytics—and help to process user data gathered by cookies. The services are provided by Google Inc. and are subject to the company’s privacy policy. Google’s Privacy Policy
  • LinkedIn Marketing Solutions / LinkedIn Ads (LinkedIn Corporation) LinkedIn Ads allow for tracking the efficiency of advertising campaigns launched via LinkedIn. The service is provided by LinkedIn Corporation and is subject to the company’s privacy policy. LinkedIn’s Privacy Policy
  • Twitter Advertising / Twitter Conversion Tracking (Twitter, Inc.) The Twitter Ads network allows for tracking the efficiency of advertising campaigns launched via Twitter. The service is provided by Twitter Inc. and is subject to the company’s privacy policy. Twitter’s Privacy Policy
2.2.3. Other widgets and scripts provided by partner third parties

In addition to advertising partners and analytics partners mentioned above, we are using widgets, which act as an intermediary between third-party websites (Facebook, Twitter, LinkedIn, etc.) and our website and allow us to provide additional information about us or our services or authorize you as our website user to share content on third-party websites.

  • Disqus (Disqus, Inc.) is a blog comment hosting service for websites and online communities that use a networked platform. Disqus integration into a corporate blog enables website users to submit a comment to any article posted on the blog after he/she authorizes it into a personal Disqus account. Disqus Privacy Policy
  • WordPress (WordPress.org) is a free and open-source content management system (CMS). WordPress Stats is the CMS’s analytics module, which gathers the following statistics: views and unique visitors, likes, followers, references, location, terms, words, and phrases people use on search engines (e.g., Google, Yahoo, or Bing) to find posts and pages on our website. The service also allows for gathering such data as clicks on an external link, cookies, etc. The service is subject to WordPress’s privacy policy.
  • Twitter Button and Twitter Syndication (Twitter, Inc.) allow you to quickly share the webpage you are viewing with all of your followers. Twitter Syndication enables users to implement a widget, which gathers information about the company’s Twitter profile and tweets. The services are provided by Twitter Inc. and are subject to the company’s privacy policy. Twitter’s Privacy Policy
  • Facebook Social Graph (Facebook, Inc.) is used to implement widgets to get data into and out of the Facebook platform. In our case, this widget is used to enable content sharing and display the number of sharings by Facebook users. The service is provided by Facebook, Inc. and is subject to the company’s privacy policy. Facebook’s Privacy Policy
  • LinkedIn Widgets (LinkedIn Corporation) are a quick way to infuse LinkedIn functionality into our website. We use this widget to enable content sharing and display the number of sharings by LinkedIn users. The service is provided by LinkedIn Corporation and is subject to the company’s privacy policy. LinkedIn’s Privacy Policy
  • OneSignal (OneSignal, Inc) is a push notification service. OneSignal’s Privacy Policy
  • ShareThis (ShareThis, Inc.) is a share button service. ShareThis Privacy Policy

2.3. Purposes and legal basis for data processing

Naviteq is gathering data via this service with a view to improving the development of our products or services. Data gathering is conducted on the basis of our or third party’s legitimate interests, or with your consent.

User data collected allow Naviteq to provide our Services and is employed in a variety of our activities that correspond our legitimate interests, including:

  • enabling analytics to draw valuable insights for smart decision making
  • contacting users
  • managing a user database
  • enabling commenting across the content delivered
  • handling payments
  • improving user experience (e.g., delivering highly personalized content suggestions) and the services delivered (e.g., a subscription service), etc.
  • providing information related to the changes introduced to our Customer Terms of Service, Privacy Policy (including the Cookie Policy), or other legal agreements

2.4. Data retention period

We set a retention period for your data — collected from our websites — to 1 year. We gather data to improve our services and the products we deliver. The retention period from our partners is set forth by them in their privacy policies.

2.5. Data recipients

We do not transfer the gathered data to third parties, apart from the cases described in the General data processing section or in this Section, as well as cases stipulated in our third partner’s privacy policies.

3. Data we gather from our web forms

3.1. We collect the following categories of data

When you fill out any of the forms located at our websites, you share the following information with us:

  • Name/surname
  • Position
  • Phone number
  • E-mail
  • Location
  • Company name
  • Any other information you provided to us from your request

3.2. How we process the data gathered

The information about the request is transferred to our CRM or Hubspot. Later, it may be used to contact you with something relevant to your initial request, provide further information related to the topic you requested, and deliver quality service.

By sharing personal information with us, you are giving consent for us to rightfully use your data for the following business purposes:

  • Send any updates regarding services you have shown interest in or provide further information related to the topic you requested.
  • Contact and communicate with you regarding your initial request. To get your consent to further contact you regarding any other services you might be interested in.
  • To get your consent to further contact you regarding any other services you might be interested in.
  • Maintenance and support activities of our CRM system and related activities.

All the information gathered via contact forms is processed by the following services:

  • WordPress (Privacy Policy)
  • Hubspot (Privacy Policy)
  • Gmail services that deliver notifications about the filled out contact forms to our employees (Privacy Shield)

3.3. Purposes and legal basis for data processing

If you fill out a contact form to get an expert’s take on your project or to get familiar with the services our company delivers, we process your data in order to enter into a contract and comply with our contractual obligations (to render Services), or answer to your request. This way, we may use your personal information to provide services to you, as well as process transactions related to the services you inquired about from us. For example, we may use your name or an e-mail address to send an invoice or to establish communication throughout the whole service delivery life cycle. We may also use your personal information you shared with us to connect you with other of our team members seeking your subject matter expertise. In case you use multiple services offered by our company, we may analyze your personal information and your online behavior on our resources to deliver an integrated experience. For example, to simplify your search across a variety of our services to find a particular one or to suggest relevant product information as you navigate across our websites.

With an aim to enhance our productivity and improve our collaboration—under our legitimate interest—we may use your personal data (e.g., an e-mail, name, job title, or activity taken on our resources) to provide the information we believe may be of interest to you. Additionally, we may store the history of our communication for the legitimate purposes of maintaining customer relations and/or service delivery, as well as we may maintain and support the system, in which we store collected data.

If you fill out contact forms for any other purpose, including the download of white papers or to request a demo, we process data with a legitimate interest to prevent spam and restrict the direct marketing of third-party companies. Our interactions are aimed at driving engagement and maximizing the value you get through our services. These interactions may include information about our new commercial offers, white papers, newsletters, content, and events we believe may be relevant to you.

3.4. Data retention period

We set a retention period for your data collected from contact forms on our websites to 1 year. This data may be further used to contact you if we want to send you anything relevant to your initial request (e.g., updated information on the white papers you downloaded from our websites).

3.5. Data recipients

We do not transfer data to third parties, apart from the cases described in the General data processing section and this section.

4. Data we gather from our web forms

4.1. We collect the following categories of data

When you answer a question and/or provide information via chatbot, you share the following information with us:

  • Name/surname
  • Position
  • Phone number
  • E-mail
  • Location
  • Company name
  • Any other information you provided to us from your request

4.2. How we process the data gathered

The information gathered is transferred to our CRM or Hubspot. Later, it may be used to contact you with something relevant to your initial request, provide further information related to the topic you requested, and deliver quality service.

By sharing personal information with us, you are giving consent for us to rightfully use and process in any way your data, including for the following business purposes:

  • Send any updates regarding services you have shown interest in or provide further information related to the topic you requested.
  • Contact and communicate with you regarding your initial request.
  • To get your consent to further contact you regarding any other services you might be interested in.
  • Maintenance and support activities of our CRM system and related activities, etc.

All the information gathered via chatbot is processed by the following services:

  • WordPress (Privacy Policy)
  • Gmail services that deliver notifications about the filled out contact forms to our employees (Privacy Shield)
  • Drift.com, Inc. (Privacy Policy)

4.3. Purposes and legal basis for data processing

If you share personal data via chatbot to get an expert’s take on your project or to get familiar with the services our company delivers, we process your data in order to enter into a contract and to comply with our contractual obligations (to render Services), or answer to your request. This way, we may use your personal information to provide services to you, as well as process transactions related to the services you inquired from us. For example, we may use your name or an e-mail address to send an invoice or to establish communication throughout the whole service delivery life cycle. We may also use your personal information you shared with us to connect you with other of our team members seeking your subject matter expertise. In case you use multiple services offered by our company, we may analyze your personal information and your online behavior on our resources to deliver an integrated experience. For example, to simplify your search across a variety of our services to find a particular one or to suggest relevant product information as you navigate across our websites.

With an aim to enhance our productivity and improve our collaboration—under our legitimate interest—we may use your personal data (e.g., an e-mail, name, job title, or activity taken on our resources) to provide information we believe may be of interest to you. Additionally, we may store the history of our communication for the legitimate purposes of maintaining customer relations and/or service delivery, as well as we may maintain and support the system, in which we store collected data.

If you share personal data via chatbot for any other purpose we process data with a legitimate interest to prevent spam and restrict direct marketing of third-party companies. Our interactions are aimed at driving engagement and maximizing value you get through our services. These interactions may include information about our new commercial offers, white papers, newsletters, content, and events we believe may be relevant to you.

4.4. Data retention period

We set a retention period for your data collected from communication with us via chatbot to 6 years. This data may be further used to contact you if we want to send you anything relevant to your initial request (e.g., updated information on your initial request, etc).

4.5. Data recipients

We do not transfer data to third parties, apart from the cases described in the General data processing section and this section.

5. Data we gather via e-mails, messengers, widgets, and phones

5.1. We collect the following categories of data

When you interact with us via any other means and tools, we gather the following information about you:

  • Name/surname
  • Position
  • Phone number
  • E-mail
  • Location
  • Company name
  • Any other information you provided to us from your request

The information about a customer call is stored in our internal system and includes a full call recording (starting the moment a connection was established), a voice recording if any available, a phone number, and a call duration.

5.2. How we process the data gathered

All the requests acquired via e-mail are stored within a business Gmail account of Naviteq located at the Google’s server. The information about the request is further transferred and stored in internal CRM either by employees of Naviteq manually or automatically for further processing according to our purposes. We may maintain and support the system, in which we store collected data.

5.3. Purposes and legal basis for data processing

When you contact us via any other means to get an expert’s take on your project / our services or to make any kind of a request, we process your data in order to enter into a contract, to comply with our contractual obligations (to render Services), or answer to your request.

This way, we may use your personal information to provide services to you, as well as process transactions related to the services you inquired from us. For example, we may use your name or an e-mail address to send an invoice or to establish communication throughout the whole service delivery life cycle. We may also use your personal information you shared with us to connect you with other of our team members seeking your subject matter expertise. In case you use multiple services offered by our company, we may analyze your personal information and your online behavior on our resources to deliver an integrated experience. For example, to simplify your search across a variety of our services to find a particular one or to suggest relevant product information as you navigate across our websites. With an aim to enhance our productivity and improve our collaboration, what is our legitimate interest, we may use your personal data—such as an e-mail, name, job title, or activity taken on our resources—to provide information we believe may be of interest to you. Additionally, we may store the history of our communication for the legitimate purposes of maintaining customer relations and/or service delivery.

If you communicate with us for any other purpose we process data with a legitimate interest to prevent spam and restrict direct marketing of third-party companies. Our interactions are aimed at driving engagement and maximizing value you get through our services. These interactions may include information about our new commercial offers, white papers, newsletters, content, and events we believe may be relevant to you or your initial request.

5.4. Data retention period

We set a retention period for the data collected to 6 years. This data may be further used to contact you if we want to send you anything relevant to your initial request.

5.5. Data recipients

We do not share data with third parties, apart from the cases described in the General data processing section and cases stipulated in our third partner’s privacy policies.

6. Data we gather if you are our customer

6.1. We collect the following categories of data

If you are our customer, you have already shared the following information with us to process:

  • Names/surnames of contact persons
  • Positions
  • Phone numbers
  • E-mails
  • Skype IDs
  • Company name/address
  • Any other information you provided to us during service delivery
  • History of our communication, etc.

6.2. How we process the data gathered

  • Information about the existing customers is transferred to our internal CRM (by our employees manually or automatically on receiving a contact form) and Hubspot (HubSpot, Inc. Privacy Policy) for further processing a customer request and providing relevant services, as well as developing recommendations on improving the services we deliver. We may further need any maintenance and support activities of our CRM system or any related activities.
  • To share contact information and information related to the services a customer is interested in, we may use the following messengers: Skype (Privacy Policy), Viber (Privacy Policy), WhatsApp (Privacy Policy), or Telegram (Privacy Policy), as well as e-mail services—Gmail (Privacy Policy) or Outlook (Privacy Policy)
  • To store and share project requirements or any other information submitted by a customer (e.g., a project budget estimation to deliver a suitable commercial offer, UI mockups submitted by a customer, test access to a customer system, etc.), we may use services of Google (Privacy Policy), Adobe (Privacy Policy), Microsoft Office (Privacy Policy), Atlassian (Privacy Policy), and Trello (Privacy Policy)
  • To provision phone calls in a distributed manner, Naviteq makes use of services to store historical data about the activities conducted.
  • To establish internal business processes within our departments and teams and to ensure timely request processing, we make use of Trello (Privacy Policy) and Atlassian (Privacy Policy). These services may store project information related to a technology stack, budget, roadmap, deadlines, Naviteq project team, etc.
  • To store the audio recordings of negotiations with a customer in order to clarify details if necessary and conduct meetings with previous, existing, and potential customers, we make use of GoToMeeting (Privacy Policy), and Hangouts (Privacy Policy), or Zoom (Privacy Policy).
  • To store case studies, describing a delivered project approved by a customer, we use an internal web portal—SharePoint Portal (Privacy Policy)—which only employees of Naviteq can access.
  • To provision contracts, all the data about the active customers is stored in a secured internal network resource with limited access. This resource is available only to our account managers or other employees concerned for the purpose of improving service delivery while establishing communication with a customer, issuing an invoice, and generating reports for a customer. Additional services Naviteq uses for issuing invoices Azets AS (Privacy Policy). These services process data in compliance with the privacy policies of the mentioned services.
  • Additionally, by sharing with us this information you are giving consent to contact you in order to get your consent for the possibility to contact you regarding any other services you might be interested in

6.3. Purposes and legal basis for data processing

We use personal data submitted for the following purposes:

To fulfill/comply with our contractual obligations or answer your request. For example, we use your name or an e-mail in contact to send invoices or communicate with you at any stage of the service delivery life cycle. This way, we may use your personal information to provide services to you, as well as process transactions related to the services you inquired from us. For example, we may use your name or an e-mail address to send an invoice or to establish communication throughout the whole service delivery life cycle. We may also use your personal information you shared with us to connect you with other of our team members seeking your subject matter expertise. In case you use multiple services offered by our company, we may analyze your personal information and your online behavior on our resources to deliver an integrated experience. For example, to simplify your search across a variety of our services to find a particular one or to suggest relevant product information as you navigate across our websites.

With an aim to enhance our productivity and improve our collaboration, what is our legitimate interest, we may use your personal data—such as an an e-mail, name, job title, or activity took on our resources — to provide the information we believe may be of interest to you and communicate with you in order to get your consent for a possibility to contact you regarding any other services you might be interested in. Additionally, we may store the history of our communication for the legitimate purposes of maintaining customer relations and/or service delivery as well as to maintain and support our CRM system and related activities.

6.4. Data retention period

We set the retention period for your data about our customer to 1 year from last Service delivery. We keep it to be able to reach you when we have something relevant to your initial request (for example, updated information on related services, news, events, updates, etc).

6.5. Data recipients

We do not share data with third parties, apart from the cases described in the General data processing section or in this section.

7. Data we gather from the attendees of our events

7.1. We collect the following categories of data

When you register or attend an event organized by Naviteq, you share the following information with us:

  • Names/surnames of contact persons
  • Positions
  • Phone numbers
  • E-mails
  • Skype IDs
  • Company name/address
  • Any other information you provided to us during service delivery
  • History of our communication, etc.

7.2. How we process the data gathered

Data about users who filled out a contact form is stored in our internal CRM, which shall be maintained and supported, and Hubspot (HubSpot, Inc. Privacy Policy) — by our employees manually or automatically on receiving a contact form — for further processing a customer request and providing relevant services, as well as developing recommendations on improving the services we deliver.

To share contact information, as well as information related to the events and services that may be of interest to a customer, Naviteq may use the following:

  • Messengers: Skype (Privacy Policy), Viber (Privacy Policy), WhatsApp (Privacy Policy), or Telegram (Privacy Policy)
  • E-mail services Gmail (Privacy Policy) or Outlook (Privacy Policy)
  • Social media platforms: LinkedIn (Privacy Policy)
  • VOIP phone and conferencing services: GoToMeeting (Privacy Policy), Hangouts (Privacy Policy) or Zoom (Privacy Policy).

To provide users with the possibility to register for an event organized by Naviteq and acquire tickets, we use Eventbrite (Privacy Policy).

To store and share information about attendees of the events organized by Naviteq, as well as to improve all the online activities related to such events, Naviteq makes use of the services of Google (Privacy Policy) and Microsoft (Privacy Policy)

To enable marketing activities and share information about relevant services provided by our company, we use remarketing and advertising instruments available through Google Adwords (Privacy Policy).

To build a strong community around the events organized by Naviteq and to interact with those interested in our services, we use Meetup.com (Privacy Policy).

To optimize internal processes and improve communication channels, we may use Atlassian (Privacy Policy) and Trello (Privacy Policy).

7.3. Purposes and legal basis for data processing

To establish efficient communication with customers about our services, we may use the following data:

  • To fulfill and comply with our contractual obligations or answer to your request. To maintain contract development, we use your contact data to send transactional information via e-mail, Skype, or any other communication means or services. Your contact data is also used to confirm your request, respond to any of your questions, inquiries, or requests, provide support, as well as send you any updates on the services we deliver.
  • To fulfill our legitimate interest, we use your contact information and information about your interaction with our services to send promotional materials that we find relevant to you via e-mail, Skype, or any other communication means or services. Our interactions are aimed at driving engagement and maximizing the value you get through our services. These interactions may include information about our new events, commercial offers, newsletters, content, and events we believe may be relevant to you. To fulfill our legitimate interest, we use your contact information which is stored at our CRM system in order to maintain and support our CRM system and carry on any related activities.

7.4. Data retention period

We set the retention period for your data about our customer to 6 years from the last event you have been registered. We keep it to be able to reach you when we have something relevant to your initial request (for example, updated information on calls, e-mail, etc.).

7.5. Data recipients

We do not share personal data with third parties, apart from the cases, which implies Naviteq is to provide a list of registrars to the organizer of the event with a view to ensuring an acceptable level of organization and security.

8. General data processing and data storage

Our processing means any operation or set of operations that is performed on personal data or on sets of personal data, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction, support, maintenance, etc.

The retention period for stored data varies depending on its type. Once the retention period expires, we either delete or anonymize the personal data collected. If data was transferred to backup storage and therefore cannot be deleted, we continue to store it securely but do not use it for any purpose. In all other cases, we proceed with the deletion of the data.

The information available through our websites that was collected by third parties is subject to the privacy policies of those third parties, and so is its retention period.

To prevent spam, we keep track of spam and swindler accounts, which may be blocked through filtering at the server level.

Requests containing words that may be treated as spam-related, or that may promote the distribution of misleading information, are filtered at the server level as well as manually by company employees.

Data is stored on our servers, as well as on cloud services provided by Google, Amazon, and HubSpot, and on other services stipulated in this policy, inter alia Drift.com.

We do not make automated decisions, including profiling.

9. Your rights

Below, you will find a list of your rights. Please note that some of these rights may be limited for requests that would expose the personal information of another individual, who holds the very same rights to privacy. Likewise, we will not be able to satisfy your request for data deletion if it covers information we are required to keep by law.

The right to be informed and to access information. You have the legal right to access your personal data and to ask whether we use this data for any purpose. In line with our general policy, we will provide you with a free copy of your personal information in use within a month of receiving your request. We will send the information in use as a password-protected PDF file. For excessive or repeated requests, we may charge a fee. For numerous or complex requests, we may extend our response time by up to two additional months; in that case, we will inform you of the reasons for the extension. If we refuse to address a particular request, we will explain why and outline the further actions available to you. Should you wish to take further action, we will require two trusted IDs from you to prove your identity. You may forward your requests to our Data Protection Officer ([email protected]). Please describe the nature of your request to help us process your inquiry.

The right to rectification. If you believe we store personal data about you that is incorrect or incomplete, you may request that we correct or supplement it. You also have the right to change your information yourself by logging into your account with us.

The right to erasure, or “the right to be forgotten”. Under this principle, you may request that we delete or remove your personal data if there is no solid reason for its continued processing. If you would like us to remove you from our database, please e-mail [email protected]. The right to be forgotten may be exercised for the following reasons:

  • The data is no longer necessary for the purpose for which it was collected.
  • You withdraw consent with respect to the original reason data was processed, and there is no other reason for us to continue to store and process your personal data.
  • You have objections to processing your personal data, and there are no overriding legitimate reasons for us to continue to process it.
  • Your personal data has been unlawfully processed.
  • Your personal data has to be deleted to comply with a legal obligation under European Union or Member State law to which Naviteq is subject.
  • Your personal data has been collected in relation to the offer of information society services.

The right to restrict processing. Under this right, you may request that we limit the processing of your personal data. In this case, we may store information sufficient to identify which data you want restricted, but we cannot process it further. The right to restrict processing applies in the following cases:

  • Where you contest the accuracy of your personal data, we will restrict data processing until we have verified the accuracy of your personal data.
  • Where you have objected to data processing under legitimate interests, we will consider whether our legitimate interests override yours.
  • When data processing is unlawful, and you oppose data deletion and request restriction instead.
  • If we no longer need your personal data, but you require this data to establish, exercise or defend a legal claim.

If we have disclosed your personal data in question to third parties, we will inform them about the restriction on data processing, unless it is impossible or involves disproportionate effort to do so. We will inform you if we decide to lift a restriction on data processing.

The right to object. You are eligible to object to processing your personal data based on legitimate interests (including profiling) and direct marketing (including profiling). The objection must be on “grounds relating to his or her particular situation.” We will inform you of your right to object in the first communication you receive from us. We will stop processing your personal data for direct marketing purposes, as soon as we receive an objection.

The right to data portability. You are entitled to obtain the personal data of yours that Naviteq processes, to use for your own purposes. This means you have the right to receive the personal data you have shared with us in a structured, machine-readable format, so you can transfer it to a different data controller. This right applies in the following circumstances:

  • Where you have provided the data to Naviteq.
  • Where data processing is carried out because you have given Naviteq your consent to do so.
  • Where data processing is carried out to perform a contract between you and Naviteq.
  • Where data processing is carried out automatically. (No membership data is processed using automated means, so this right does not apply).

Withdrawal of consent. If we process your personal data based on your consent (as indicated at the time the data was collected), you have the right to withdraw your consent at any time. Please note that if you exercise this right, you may then have to provide your consent on a case-by-case basis for the use or disclosure of certain personal data, where such use or disclosure is necessary for you to utilize some or all of our services.

Right to file a complaint. You have the right to file a complaint about Naviteq’s handling of your data with the supervisory authority of your country or of a European Union Member State.

10. Data security and protection

We use data hosting service providers in the United States and Ireland to store the information we collect, and we apply additional technical measures to secure your data.

These measures include, without limitation: data encryption, password-protected access to personal information, limited access to sensitive data, encrypted transfer of sensitive data (HTTPS, IPSec, TLS, PPTP, and SSH), firewalls and VPN, intrusion detection, and antivirus software on all production servers.

The data collected by third-party providers is protected by them and is subject to their terms and privacy policies.

The data collected on our websites by Naviteq, as well as the data, which you entrust us under NDAs and contracts, is protected by us. We follow the technical requirements of GDPR and ensure security standards are met without exception.

Though we implement safeguards designed to protect your information, no security system is impenetrable. Due to the inherent nature of the Internet, we cannot guarantee that data is absolutely safe from intrusion by others during transmission, while stored on our systems, or otherwise in our care.

11. Data transfer outside EEA

We collect information worldwide and primarily store it in the United States and Ireland. We transfer, process, and store your information outside of your country of residence, across the regions where we or our third-party service providers operate, in order to deliver our services to you and for maintenance and support purposes. Whenever we transfer your information, we take precautionary measures to protect it. Data collected by third-party providers may likewise be transferred to different countries globally for processing; these transfers fall under the terms and privacy policies of those providers and (or) under standard data protection clauses.

The data collected by Naviteq may be transferred across our offices. We are headquartered in Israel.

12. General description

We may supplement or amend this policy by additional policies and guidelines from time to time. We will post any privacy policy changes on this page. We encourage you to review our privacy policy whenever you use our services to stay informed about our data practices and the ways you can help to protect your privacy.

Our services are not directed to individuals under 16. We do not knowingly collect personal information from individuals under 16. If we become aware that an individual under 16 has provided us with personal information, we will take measures to delete such information.

If you disagree with any changes to this privacy policy, you will need to stop using our services.

Contact us

Your information is controlled by Naviteq Ltd. (Israel). If you have questions or concerns about how your information is handled, please direct your inquiry to Naviteq Ltd. (Israel), which is responsible for facilitating such inquiries.

Naviteq Ltd. Israel:

Alon Building 1, Yigal Alon St 94, Tel Aviv-Yafo, Israel

Phone/fax: +972 (58) 4448558

E-Mail: [email protected]