
6 Kubernetes Anti-Patterns That Quietly Drain Your Budget (And How to Fix Them)

While you’re focused on shipping features and scaling your applications, silent infrastructure anti-patterns are quietly burning through your cloud budget. These aren’t obvious failures that trigger alerts or cause outages. They’re the subtle, persistent inefficiencies that compound month after month, turning what should be cost-effective container orchestration into an expensive resource drain.

After auditing dozens of Kubernetes deployments, our team at Naviteq has identified six common (and fixable) patterns that consistently appear across organizations of all sizes. From large enterprises to mid-sized companies and fast-growing startups, the same mistakes show up repeatedly, silently wasting 30-60% of cloud spend on container infrastructure. This isn’t about extreme frugality; it’s about smart engineering: understanding your true resource needs, adopting sound FinOps practices, and eliminating waste. If you’re a CTO, VP of Engineering, Platform Engineering Lead, Cloud Architect, or FinOps Manager, this one’s for you.

Every single one of these anti-patterns is fixable with Kubernetes cost optimization tools and FinOps practices. Let’s dive into the six most expensive mistakes your team is probably making right now.

1. Overprovisioned CPU and memory

Overprovisioned CPU and memory is the most common form of resource waste in a Kubernetes deployment. Many DevOps engineers simply deploy an application, set some resource requests and limits, and call it a day. But how do they arrive at those numbers? Is it based on rigorous performance testing under realistic load? Or is it a gut feeling, a copy-paste from an older project, or just the “safe” option?

The problem

Without proper performance testing, resource requests and limits are often wild guesses. Teams tend to overprovision because it’s safer than underprovisioning and risking an outage. A pod asking for 4 CPU cores and 8GB of RAM might only ever use 0.5 CPU and 1GB. You’re paying for resources that are reserved but never used, both at the pod and node level. 

The goal isn’t to run everything at 100% utilization; that’s a recipe for performance issues. Aim for 60-70% average utilization on CPU and memory, with clear scaling policies to handle traffic spikes.
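
To make the target concrete, here is a minimal sketch of right-sized requests and limits on a single container. The Deployment name, image, and figures are illustrative placeholders; the real numbers should come from your own load tests rather than be copied as-is.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api               # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: app
          image: example/api:1.0  # placeholder image
          resources:
            requests:
              cpu: "500m"         # measured baseline under realistic load
              memory: "1Gi"
            limits:
              cpu: "1"            # modest headroom above the observed peak
              memory: "1536Mi"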

Why it hurts

  • Higher node costs: If your pods are requesting more resources than they need, Kubernetes will schedule them on larger and more expensive nodes or scale up more nodes than necessary.
  • Reduced bin packing efficiency: Overprovisioned pods make it harder for the scheduler to efficiently pack pods onto nodes. This leads to more fragmented resources and idle capacity.
  • False sense of security: Overprovisioning feels like your application has plenty of headroom, but in reality, you’re just paying for empty space.

The fix

  1. Performance test religiously: Implement performance testing as part of your CI/CD pipeline. Measure actual CPU, memory, network, and disk I/O usage for your applications. Use tools like K6, JMeter, or Locust to load test your applications with realistic traffic patterns and measure resource consumption.
  2. Right-size requests and limits: Based on your performance testing, set resource requests to the minimum required for stable operation and limits slightly above that to catch unexpected spikes without crashing the node.
  3. Implement Vertical Pod Autoscaler (VPA): VPA can analyze your actual resource usage and recommend appropriate requests. Deploy it in recommendation mode first to understand your real consumption patterns before making changes (see the sketch after this list).
  4. Leverage Horizontal Pod Autoscaler (HPA) and KEDA: HPA scales the number of pod replicas based on metrics such as CPU and memory, matching capacity to actual demand instead of over-provisioning individual pods. KEDA extends HPA to scale on external event sources such as message queue length or database connections.
  5. Consider Karpenter for node autoscaling: Karpenter is an open-source, high-performance Kubernetes cluster autoscaler that optimizes node provisioning. Unlike the traditional Cluster Autoscaler, Karpenter can provision exactly the right size and type of node for your pending pods, often leading to significant cost savings by reducing wasted node capacity.
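
As a hedged starting point for step 3, this is roughly what a VPA object in recommendation-only mode looks like: it surfaces suggested requests without evicting or resizing anything. The target Deployment name is a placeholder.

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-api-vpa           # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-api             # the workload whose usage VPA should analyze
  updatePolicy:
    updateMode: "Off"             # recommend only; do not change running pods

Once the recommendations stabilize (kubectl describe verticalpodautoscaler example-api-vpa), compare them with your current requests before changing anything.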

Action item

Audit your most expensive deployments and check whether their resource requests align with actual usage under load.

2. Idle nodes

Kubernetes nodes that sit mostly empty are one of the most expensive infrastructure anti-patterns you can have. Unlike idle resources within busy nodes, completely underutilized nodes represent pure waste: you’re paying full price for compute capacity that provides minimal value.

The problem

Underutilized or entirely idle nodes are one of the most common sources of unnecessary cloud cost. Common causes include:

  • Poor bin packing: When pods have mismatched resource requests or anti-affinity rules spread workloads inefficiently, you end up with fragmented capacity across multiple nodes. A node might be 30% utilized but unable to schedule additional pods due to resource constraints or scheduling rules.
  • Node pool strategy flaws: Many teams create separate node pools for different workload types without considering utilization patterns. You might have dedicated pools for batch jobs, web services, and background tasks, each running at low utilization because workloads don’t overlap efficiently.
  • Lack of scheduling constraints: Without proper resource requests, affinity/anti-affinity rules, taints, and tolerations, pods can spread out haphazardly, preventing efficient packing.
  • Scaling up too aggressively: Your cluster autoscaler might be configured to scale up too quickly or too many nodes at once, and then take too long to scale them back down.

Why it hurts

  • Direct cloud spend: You pay for every hour a node is running, regardless of how much of its capacity is being used.
  • Increased management overhead: More nodes mean more need for maintenance, patching, and monitoring.

The fix

  1. Optimize resource requests and limits: Monitor bin packing efficiency with dashboards that show resource requests vs. actual node capacity. Alert when nodes consistently run below utilization thresholds, and right-size them proactively to reduce wastage.
  2. Implement Cluster Autoscaler or Karpenter: A robust autoscaler is crucial. It should scale nodes up when demand increases and, critically, scale them down when they are underutilized. Karpenter, with its ability to provision custom-fit nodes, often outperforms the traditional Cluster Autoscaler in cost efficiency.
  3. Consolidate node pools: In many cases, fewer, more flexible node pools work better than highly specialized ones. Consider using taints and tolerations to handle special requirements while maintaining efficient packing.
  4. Use Pod Disruption Budgets (PDBs) wisely: PDBs are important for application availability, but overly restrictive PDBs can hinder the autoscaler’s ability to evict pods and scale down nodes (see the sketch after this list).
  5. Leverage node affinity and anti-affinity: Use nodeSelector or nodeAffinity to place specific workloads on specific node types only when necessary, and use podAntiAffinity to spread critical services across different nodes, while staying mindful of how these rules affect bin packing efficiency.
  6. Set pod priority and preemption: Define higher priorities for critical workloads to ensure they get scheduled even during resource contention. This also allows the removal of lower-priority pods from underutilized nodes.
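
As an illustration of point 4, here is a sketch of a PodDisruptionBudget that keeps an application available while still letting the autoscaler drain and remove underutilized nodes. The name and label are placeholders; a PDB that effectively allows zero disruptions (for example, minAvailable equal to the replica count) will block scale-down entirely.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-api-pdb           # hypothetical name
spec:
  maxUnavailable: 1               # allow one pod at a time to be evicted during node drains
  selector:
    matchLabels:
      app: example-api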

Action item

Review your cluster autoscaling logs and your node utilization metrics. Identify any nodes that have consistently low CPU/memory utilization over extended periods.

3. Unused DaemonSets

DaemonSets are designed to run a copy of a pod on all or some nodes in a cluster. They’re essential for things like logging agents, monitoring agents, and network proxies.

The problem

Many Kubernetes distributions, especially managed services like GKE, come with a suite of default DaemonSets. While some are critical, others might provide functionality you don’t use, already have an alternative for, or simply don’t need in certain environments (e.g., a development cluster). DaemonSets are resource multipliers: every node runs a copy of every DaemonSet pod.

This problem is particularly acute in Google Kubernetes Engine (GKE) Autopilot clusters, where the platform deploys multiple system DaemonSets by default. This includes logging agents, monitoring collectors, security scanners, and network plugins.

Why it hurts

  • Direct resource consumption: Each DaemonSet pod requests CPU and memory, even if only a little. With many nodes and many DaemonSets, that consumption adds up quickly.
  • Hidden costs in Autopilot: In Autopilot, you don’t control the base VM image, so you’re implicitly paying for all the system DaemonSets that Google includes. If you’re not auditing them, you’re paying for features you may not even be using.
  • Increased attack surface: Every running container is a potential vulnerability, even if it’s a system component.
  • Legacy DaemonSets: Teams often install monitoring agents, log shippers, or security tools, then migrate to different solutions without cleaning up the old deployments. The unused DaemonSets continue running indefinitely, consuming resources and incurring costs.

The fix

  1. Audit all DaemonSets: Conduct monthly audits of all DaemonSets in your clusters. For each one, document its purpose, owner, and whether it’s actively used.
  2. Understand their purpose: For each DaemonSet, research what it does. Is it essential for cluster operation? Is it providing a service you actively use and rely on?
  3. DaemonSet approval process: Create a DaemonSet approval process for new deployments. Before allowing cluster-wide pod deployment, require teams to justify the need and document the expected resource consumption.
  4. Disable/remove unnecessary DaemonSets: If a DaemonSet is not serving a critical purpose for your team or application, remove it. Be cautious with system DaemonSets in managed Kubernetes services; consult the documentation or Kubernetes experts like those at Naviteq before going through this process.
  5. Use node selectors for DaemonSets: If a DaemonSet is only needed on a subset of nodes, use nodeSelector or nodeAffinity to restrict where it gets scheduled (see the sketch after this list). This prevents it from running unnecessarily on all nodes.
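
For point 5, a hedged sketch of a DaemonSet restricted to a labeled subset of nodes. The agent name, image, and label key are placeholders for whatever your environment actually runs.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper               # hypothetical agent
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      nodeSelector:
        workload-type: web        # only nodes carrying this label run the agent
      containers:
        - name: agent
          image: example/log-shipper:1.0   # placeholder image
          resources:
            requests:
              cpu: "50m"
              memory: "64Mi"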

Action item

List all DaemonSets in your cluster using kubectl get daemonsets --all-namespaces. For any that were not explicitly deployed by your team, research their purpose and determine whether they are truly essential for your operations.

4. Too many clusters

Kubernetes clusters aren’t free. Each cluster comes with control plane costs, ingress controllers, DNS servers, monitoring agents, and all the operational overhead of maintaining a separate environment.

The classic anti-pattern is one cluster per environment: development, staging, performance testing, integration testing, user acceptance testing, and production. Each cluster runs its own ingress, monitoring, logging, and security infrastructure, multiplying operational costs and complexity. While isolation has its benefits, the “one cluster per stage/team” mentality often leads to significant, unnecessary cost and complexity.

The problem

Each Kubernetes cluster incurs a baseline cost, regardless of how many applications are running on it. This includes:

  • Control plane costs: The masters, etcd, and associated components all consume resources and incur charges. For self-managed clusters, you cover this expense with your own budget and people; for managed clusters in the cloud, you pay the provider’s control plane fee.
  • Networking costs: Load balancers, NAT gateways, VPN connections, and internal DNS services are often duplicated across clusters.
  • Monitoring and logging costs: Separate instances of Grafana, Prometheus, Elasticsearch, Splunk, or cloud-provider logging solutions for each cluster.
  • Ingress controllers and service meshes: Duplicated installations and configurations.
  • Tooling and automation: Managing CI/CD pipelines, security scanning, and policy enforcement across many clusters creates redundant work.
  • Idle capacity: Smaller clusters are less efficient at bin packing. You’re more likely to have idle capacity across several small clusters than in one well-managed, larger cluster.

Why it hurts

  • Direct cloud spend: All the duplicated infrastructure and management components add up quickly.
  • Increased operational overhead: Managing, upgrading, and securing numerous clusters is far more complex and resource-intensive than managing a few well-designed ones.
  • Configuration drift: It becomes harder to maintain consistent configurations and policies across many clusters, leading to potential security gaps and operational issues.
  • Certificate management: It becomes a nightmare with multiple clusters. TLS certificates, service mesh configurations, and security policies need to be maintained independently, increasing both operational overhead and the probability of configuration drift.

Action item

Draw an architecture diagram of your current cluster setup. For each cluster, list its purpose and the duplicated services it hosts. Identify opportunities for consolidation using namespaces.
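
One common consolidation pattern, sketched here with illustrative names and figures, is to give each team or stage a namespace with its own ResourceQuota (plus NetworkPolicies and RBAC as needed) inside a shared cluster, instead of a dedicated cluster each:

apiVersion: v1
kind: Namespace
metadata:
  name: payments-staging          # hypothetical team/stage namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: payments-staging-quota
  namespace: payments-staging
spec:
  hard:
    requests.cpu: "8"             # caps what this tenant can reserve in the shared cluster
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"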

5. Always-on non-prod environments

Non-production environments that run 24/7 represent some of the most wasteful cloud spending in modern organizations. Development and testing workloads typically see heavy usage during business hours and sit completely idle during nights and weekends, yet most teams leave these environments running continuously.

The problem

Non-production environments (development, testing, staging, UAT) are critical for development and testing. However, they are rarely used continuously. Leaving them running when not in active use is a waste of resources.

Why it hurts

  • Continuous cloud billing: You pay for compute (CPU, memory) and any associated services (databases, caches, storage) 24/7, even if they’re only used for 8-10 hours a day. An environment that’s only needed for roughly 50 working hours a week still bills for all 168, so around 70% of its cost is pure idle time.
  • Complicated shutdown process: Databases and stateful services complicate the shutdown process. Teams often keep entire clusters running because they don’t want to manage the complexity of stopping and starting persistent workloads. This leads to situations where a single database keeps dozens of other services running unnecessarily.

The fix

  1. Implement scale-to-zero for non-prod: Implement automated scaling policies that shut down non-production environments outside business hours. Tools such as KEDA’s cron scaler or kube-downscaler can automatically scale deployments to zero replicas during off-hours and scale them back up when needed.
  2. Cron-based scaling: Use cron-based scaling for predictable schedules. Most development teams work standard business hours, making it straightforward to implement scaling policies that match usage patterns, for example scaling down at 6 PM and back up at 8 AM automatically (see the sketch after this list).
  3. Automate CI runner scaling: CI/CD tools like GitHub Actions let you self-host runners (for secure, stable access to your internal workloads), and that CI/CD infrastructure should scale to zero when not in use. Many organizations run dedicated build agents continuously when they could provision runners on demand, consuming resources only during active builds.
  4. Develop a “Shutdown” and “Startup” process: Implement weekend shutdown policies for development and staging environments unless there’s a specific business need for weekend availability.
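
A hedged sketch of steps 1 and 2 using KEDA’s cron scaler: the deployment runs two replicas during business hours and drops to zero outside them. The names, timezone, and schedule are placeholders.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-api-office-hours  # hypothetical name
  namespace: development          # hypothetical non-prod namespace
spec:
  scaleTargetRef:
    name: example-api             # the Deployment to scale
  minReplicaCount: 0              # fully off outside the schedule
  triggers:
    - type: cron
      metadata:
        timezone: Europe/London   # placeholder timezone
        start: "0 8 * * 1-5"      # scale up at 08:00, Monday to Friday
        end: "0 18 * * 1-5"       # scale back down at 18:00
        desiredReplicas: "2"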

Action item

Identify all non-production environments. For each, determine its active usage hours. Implement a strategy to scale it down during off-hours.

6. No autoscaling or misconfigured HPA/KEDA

Autoscaling is supposed to optimize costs by matching resource allocation to actual demand. Configured correctly, it delivers both efficiency and cost savings; configured poorly, it often creates more waste than manual resource management.

The problem

Many teams enable Horizontal Pod Autoscaler (HPA) or KEDA without fully understanding the underlying metrics, thresholds, and scaling behavior.

  • Too aggressive scaling: HPA/KEDA configured to scale up too quickly, or based on overly sensitive metrics, can lead to “thrashing,” where pods are constantly being added and removed. This incurs scheduling overhead and potential overprovisioning.
  • Insufficient cool-down periods: If the scale-down cool-down period is too short, the autoscaler might remove pods only to add them back moments later. If it’s too long, you’re paying for idle pods for an extended duration.
  • Wrong metrics: Scaling based on metrics that don’t truly reflect application load (e.g., scaling on CPU for an I/O-bound application) will be ineffective and wasteful.
  • Missing resource requests: HPA relies on resource requests to make informed scaling decisions. If your pods don’t have accurate resource requests, HPA will struggle to make good choices.
  • Varying workload patterns: A “one-size-fits-all” HPA configuration might not work for all applications, especially those with spiky or unpredictable traffic.

Why it hurts

  • Unnecessary pod sprawl: Overly aggressive scaling creates more pods than needed, increasing resource consumption and cloud costs.
  • Increased scheduling overhead: The Kubernetes scheduler has to work harder, and nodes might constantly be adjusting their capacity.
  • Performance degradation: Constant scaling up and down can sometimes lead to temporary performance dips as new pods initialize or connections are drained.
  • Horizontal Pod Autoscaler (HPA) misconfigurations: These are particularly costly. Teams often set overly sensitive scaling thresholds that cause constant pod creation and destruction.
  • KEDA (Kubernetes Event-Driven Autoscaling) misconfigurations: These can be even more problematic. Queue-based scaling without a proper understanding of message processing rates leads to over- or under-provisioning that hurts both performance and cost.

The fix

  1. Tune HPA/KEDA parameters meticulously:
    • Target average utilization: Start with 50-70% CPU/memory utilization as a target.
    • minReplicas and maxReplicas: Set these values realistically. minReplicas should handle baseline load while maxReplicas should be a sensible upper limit to prevent runaway costs.
    • scaleUp and scaleDown stabilization windows: These are crucial. scaleUp should be short enough to react to demand; scaleDown should be longer to prevent thrashing and ensure load has truly subsided (see the HPA sketch after this list).
  2. Choose the right metrics: Don’t rely on CPU alone. For I/O-bound applications, scale on network I/O or disk operations. For message queues, use queue length. KEDA excels here by allowing scaling on virtually any external metric.
  3. Combine HPA with VPA carefully: Use VPA to right-size individual pods and HPA to scale the number of those right-sized pods, but avoid letting both act on the same CPU/memory metric at once; pairing VPA with an HPA driven by custom or external metrics is the safer combination.
  4. Monitor HPA events and behavior: Watch your HPA events (kubectl describe hpa <name>) and observe how your pod counts fluctuate alongside your chosen metrics.
  5. Test under load: Test autoscaling behavior under controlled load conditions. Many teams deploy autoscaling configurations without validating they work as expected, leading to poor performance during actual traffic spikes.
  6. Conservative scaling policies: Start with conservative scaling policies and tune them based on actual behavior. Set higher thresholds for scale-up events (70-80% utilization) and implement longer cool-down periods to prevent flapping behavior.
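
Pulling the tuning advice above together, a sketch of an autoscaling/v2 HPA with a 70% CPU target and asymmetric stabilization windows (fast up, slow down). The name and numbers are starting points to adjust against your own load tests, not recommendations to copy blindly.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-api-hpa           # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-api
  minReplicas: 2                  # enough to cover baseline load
  maxReplicas: 20                 # sensible ceiling to cap runaway costs
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60    # react quickly to real demand
    scaleDown:
      stabilizationWindowSeconds: 300   # wait before removing pods to avoid thrashing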

Action item

Review the HPA/KEDA configurations for your most critical and traffic-heavy applications. Check if the minReplicas, maxReplicas, and stabilizationWindow settings are optimized for your workload patterns.

Bonus tip: use Grafana dashboards and alerts to track all of this

Visibility is the foundation of cost optimization. Without proper monitoring and alerting, these anti-patterns will keep burning money in the background while your team focuses on feature development and operational fires. You can’t optimize what you can’t measure, so a robust monitoring and alerting setup is essential to prevent cloud waste. The monitoring and log management services provided by our experts here at Naviteq can help you establish a production-grade setup for your Kubernetes deployments.

The fix

  1. Centralized monitoring: Use Prometheus and Grafana (or your cloud provider’s equivalent) to collect and visualize metrics from your entire Kubernetes environment.
  2. Custom or community Grafana dashboards: Create (or adopt existing) dashboards that highlight key cost-related metrics:
    • Node CPU/Memory utilization
    • Pod CPU/Memory requests vs. usage 
    • HPA/KEDA scaling events and replica counts
    • Cluster Autoscaler logs 
    • Network egress costs
  3. Proactive alerts: Set up alerts for:
    • Nodes with consistently low utilization
    • Pods with significant discrepancies between requests and actual usage
    • Spikes in cluster costs or node counts
    • Unusual DaemonSet resource consumption
    • Non-prod environments running during off-hours
  4. Track key cost metrics: Grafana dashboards (or other visualization) should track key cost metrics across all your clusters, such as resource utilization by node and namespace, scaling events over time, DaemonSet resource consumption, and environment runtime patterns.
  5. Create alerts for cost anomalies: Nodes consistently running below utilization thresholds, unusual scaling activity, or unexpected resource consumption spikes. These alerts should trigger investigation before small inefficiencies become major cost problems (a sample alert rule is sketched below).
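
If you run Prometheus via the Prometheus Operator (for example, the kube-prometheus-stack chart), alerts like these can live in a PrometheusRule object. The sketch below flags nodes whose CPU has sat mostly idle for hours; metric and label names depend on your exporters, so treat the expression as a starting point rather than a drop-in rule.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cost-waste-alerts          # hypothetical name
  namespace: monitoring
spec:
  groups:
    - name: cost-optimization
      rules:
        - alert: NodeConsistentlyIdle
          expr: 'avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[30m])) > 0.8'
          for: 6h
          labels:
            severity: info
          annotations:
            summary: "Node {{ $labels.instance }} has been more than 80% idle for six hours"
            description: "Consider letting the autoscaler remove this node or rebalancing workloads onto it."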

Action item: Ensure you have comprehensive Grafana dashboards and alerts in place to give you real-time visibility into your Kubernetes costs and resource utilization.

Conclusion

These six anti-patterns aren’t edge cases; they’re common, fixable problems that exist in most Kubernetes environments today. The difference between organizations that manage cloud costs effectively and those that don’t isn’t just technical sophistication; it’s the planning, skill set, and discipline to implement proper processes, tooling, and cultural practices around cost optimization. At Naviteq, our experts bring years of hands-on experience in Kubernetes cluster management to help organizations identify and eliminate these costly anti-patterns. We’ve guided countless companies through comprehensive cost optimization transformations, turning wasteful clusters into efficient, budget-conscious operations.

Start by auditing your DaemonSets, reviewing your non-production environment schedules, and checking your resource utilization patterns. These changes alone can often reduce costs by 20-30% within the first month. Our seasoned Kubernetes specialists can accelerate this process, leveraging proven methodologies and battle-tested tools to deliver results faster than internal teams working alone. Naviteq’s experts don’t just fix immediate problems; we establish sustainable practices and train your teams to keep FinOps practices in use.

Kubernetes cost optimization isn’t a one-time project; it’s an ongoing cultural practice that requires consistent attention and reinforcement. Your Kubernetes clusters can be cost-effective, scalable, and performant all at once. It just requires the right approach to resource management, environment lifecycle, and operational practices.

Ready to stop the quiet bleed on your cloud budget?

Contact Naviteq today for a free consultation and discover how much you could be saving with automated Kubernetes cost optimization strategies. Our experts can audit your current setup and provide concrete recommendations for reducing cloud spend without compromising performance. We’ll help you identify your biggest areas of waste and implement a tailored Kubernetes cost optimization strategy that delivers real results.


Frequently Asked Questions

How much can Kubernetes cost optimization typically save?

Most organizations see a 30-60% reduction in their container infrastructure costs within the first 3-6 months. The biggest savings typically come from consolidating clusters and implementing proper autoscaling policies.

Can VPA be used safely in production?

VPA works well in production for workloads with predictable resource patterns, but avoid using it with HPA on the same metrics simultaneously. Start with VPA recommendations in monitoring mode before enabling automatic resource adjustments.

Should we use Karpenter or the Cluster Autoscaler?

Karpenter provisions right-sized nodes faster and scales down more aggressively than Cluster Autoscaler, typically resulting in better cost efficiency.

How often should we audit our clusters for these anti-patterns?

Perform regular DaemonSet audits and quarterly cluster topology reviews. Set up automated alerts for resources with consistently low utilization to catch waste before it becomes expensive.

Can we safely shut down non-production environments outside working hours?

Yes, with proper backup procedures and automated startup scripts. Most teams save significant resources on non-prod environments by shutting them down during off-hours while maintaining developer productivity through quick environment restoration. That said, you know your applications best, so the final decision and responsibility rest with you. From our side, we can always advise you and find the most cost-efficient approach for your specific case.

Privacy Policy

1. Introduction

Naviteq is committed to protecting the privacy rights of data subjects.

“Naviteq”, “we,” and “us” refer to Naviteq Ltd. Israel (Check out our contact information.) We offer a wide range of software development services. We refer to all of these products, together with our other services and websites as “Services” in this policy.

This policy refers to the data we collect when you use our services or communicate with us. Examples include visiting our website, downloading our white papers and other materials, responding to our e-mails, and attending our events. This policy also explains your rights with respect to the data we collect about you. Data privacy of our employees is regulated in separate local acts and is not regulated by this policy.

Your information is controlled by Naviteq. If you have any questions or concerns about how your information is handled, please direct an inquiry to us at [email protected]. Alex Berber is our Data Protection Officer (DPO), with overall responsibility for the day-to-day implementation of this policy.

If you do not agree with this policy, please do not access or use our services, or interact with any other aspect of our business.

2. Data we gathered from our website’s users

When you visit our website, we collect usage statistics and other data, which helps us to estimate the efficiency of the content delivered. Processing data gathered from our website also helps us to provide a better user experience and improve the products and services we offer. We collect information through the use of “cookies,” scripts, tags, Local Shared Objects (Flash cookies), web beacons, and other related methods.

2.1. We collect the following categories of data:

  • Cookies and similar technologies (e.g., web beacons, pixels, ad tags and device identifiers)
  • Usage data, user behavior collected by cookies
What is a cookie?

HTTP cookie is a small piece of data that we send to your browser when you visit our website. After your computer accepts it or “takes the cookie” it is stored on your computer as an identification tag. Cookies are generally employed to measure website usage (e.g., a number of visitors and the duration of a visit) and efficiency (e.g., topics of interest to our visitors). Cookied can also used to personalize a user experience on our website. If necessary, users can turn off cookies via browser settings

2.2. How we process the data gathered

Naviteq and third-party providers we partner with (e.g., our advertising and analytics partners) use cookies and other tracking tools to identify users across different services and devices and ensure better user experience. Please see the list of them below.

2.2.1. Analytics partners

The services outlined below help us to monitor and analyze both web traffic and user behavior.

  • Google Analytics (Google LLC.) Google Analytics is a web analysis service provided by Google Inc. (Hereinafter in this document referred to as Google). Google utilizes the data collected to track and examine user behavior, to prepare reports, and share insights with other Google services. Google may use the data collected to contextualize and personalize the advertisements launched via Google’s advertising network. The service is subject to Google’s privacy policy. Google’s Privacy Policy
  • Google Tag Manager (Google LLC.) Google Tag Manager is a web service designed to optimize the Google Analytics management process. The service is provided by Google Inc. and is subject to the company’s privacy policy. Google’s Privacy Policy
  • Facebook Ads conversion tracking (Facebook, Inc.) Facebook Ads conversion tracking is an analytics service that binds data gathered from the Facebook advertising network with actions performed on Naviteq websites. The service is provided by Facebook, Inc. and is subject to the company’s privacy policy. Facebook’s Privacy Policy
  • Google AdWords Tools (Google AdWords Conversion Tracking/ Dynamic Remarketing / User List / DoubleClick) (Google LLC) Google AdWords conversion tracking and other Google Ads services are analytic instruments, that connect data from the Google AdWords advertising network with actions taken on Naviteq websites. The services are provided by Google Inc. and are subject to the company’s privacy policy. Google’s Privacy Policy
2.2.2. Advertising partners

User data may be employed to customize advertising deliverables, such as banners and any other types of advertisements to promote our services. Sometimes, these marketing deliverables are developed based on user preferences. However, not all personal data is used for this purpose. Some of the services provided by Naviteq may use cookies to identify users. The behavioral retargeting technique may also be used to display advertisements tailored to user preferences and online behavior, including outside Naviteq websites. For more information, please check the privacy policies of the relevant services.

  • Facebook Audience Network (Facebook, Inc.) Facebook Audience Network is an advertising service that helps to monitor and evaluate the efficiency of advertising campaigns launched via Facebook. The service is provided by Facebook, Inc. and is subject to the company’s privacy policy. Facebook’s Privacy Policy
  • Bing Ads (Microsoft Corporation). Bing Ads is advertising for launching and managing advertising campaigns across Bing search and Bing’s partner network. The service is provided by Microsoft Corporation and is subject to the company’s privacy policy. Microsoft Corporation’s Privacy Policy
  • Google AdWords (Google LLC) DoubleClick (Google Inc.) / DoubleClick Bid Manager / Google DoubleClick Google AdWords and Double Click are advertising services that enable efficient interaction with potential customers by suggesting relevant advertisements across Google Search, as well as Google’s partner networks. Google AdWords and Double Click are easily integrated with any other Google services—for example, Google Analytics—and help to process user data gathered by cookies. The services are provided by Google Inc. and are subject to the company’s privacy policy. Google’s Privacy Policy
  • LinkedIn Marketing Solutions / LinkedIn Ads (LinkedIn Corporation) LinkedIn Ads allow for tracking the efficiency of advertising campaigns launched via LinkedIn. The service is provided by LinkedIn Corporation and is subject to the company’s privacy policy. LinkedIn’s Privacy Policy
  • Twitter Advertising / Twitter Conversion Tracking (Twitter, Inc.) The Twitter Ads network allows for tracking the efficiency of advertising campaigns launched via Twitter. The service is provided by Twitter Inc. and is subject to the company’s privacy policy. Twitter’s Privacy Policy
2.2.3. Other widgets and scripts provided by partner third parties

In addition to advertising partners and analytics partners mentioned above, we are using widgets, which act as an intermediary between third-party websites (Facebook, Twitter, LinkedIn, etc.) and our website and allow us to provide additional information about us or our services or authorize you as our website user to share content on third-party websites.

  • Disqus (Disqus, Inc.) is a blog comment hosting service for websites and online communities that use a networked platform. Disqus integration into a corporate blog enables website users to submit a comment to any article posted on the blog after he/she authorizes it into a personal Disqus account. Disqus Privacy Policy
  • WordPress (WordPress.org) is a free and open-source content management system (CMS). WordPress Stats is the CMS’s analytics module, which gathers the following statistics: views and unique visitors, likes, followers, references, location, terms, words, and phrases people use on search engines (e.g., Google, Yahoo, or Bing) to find posts and pages on our website. The service also allows for gathering such data as clicks on an external link, cookies, etc. The service is subject to WordPress’s privacy policy.
  • Twitter Button and Twitter Syndication (Twitter, Inc.) allow you to quickly share the webpage you are viewing with all of your followers. Twitter Syndication enables users to implement a widget, which gathers information about the company’s Twitter profile and tweets. The services are provided by Twitter Inc. and are subject to the company’s privacy policy. Twitter’s Privacy Policy
  • Facebook Social Graph (Facebook, Inc.) is used to implement widgets to get data into and out of the Facebook platform. In our case, this widget is used to enable content sharing and display the number of sharings by Facebook users. The service is provided by Facebook, Inc. and is subject to the company’s privacy policy. Facebook’s Privacy Policy
  • LinkedIn Widgets (LinkedIn Corporation) are a quick way to infuse LinkedIn functionality into our website. We use this widget to enable content sharing and display the number of sharings by LinkedIn users. The service is provided by LinkedIn Corporation and is subject to the company’s privacy policy. LinkedIn’s Privacy Policy
  • OneSignal (OneSignal, Inc) is a push notification service. OneSignal’s Privacy Policy
  • ShareThis (ShareThis, Inc.) is a share button service. ShareThis Privacy Policy

2.3. Purposes and legal basis for data processing

Naviteq is gathering data via this service with a view to improving the development of our products or services. Data gathering is conducted on the basis of our or third party’s legitimate interests, or with your consent.

User data collected allow Naviteq to provide our Services and is employed in a variety of our activities that correspond our legitimate interests, including:

  • enabling analytics to draw valuable insights for smart decision making
  • contacting users
  • managing a user database
  • enabling commenting across the content delivered
  • handling payments
  • improving user experience (e.g., delivering highly personalized content suggestions) and the services delivered (e.g., a subscription service), etc.
  • providing information related to the changes introduced to our Customer Terms of Service, Privacy Policy (including the Cookie Policy), or other legal agreements

2.4. Data retention period

We set a retention period for your data — collected from our websites — to 1 year. We gather data to improve our services and the products we deliver. The retention period from our partners is set forth by them in their privacy policies.

2.5. Data recipients

We do not transfer the gathered data to third parties, apart from the cases described in the General data processing section or in this Section, as well as cases stipulated in our third partner’s privacy policies.

3. Data we gather from our web forms

3.1. We collect the following categories of data

When you fill out any of the forms located at our websites, you share the following information with us:

  • Name/surname
  • Position
  • Phone number
  • E-mail
  • Location
  • Company name
  • Any other information you provided to us from your request

3.2. How we process the data gathered

The information about the request is transferred to our CRM or Hubspot. Later, it may be used to contact you with something relevant to your initial request, provide further information related to the topic you requested, and deliver quality service.

By sharing personal information with us, you are giving consent for us to rightfully use your data for the following business purposes:

  • Send any updates regarding services you have shown interest in or provide further information related to the topic you requested.
  • Contact and communicate with you regarding your initial request. To get your consent to further contact you regarding any other services you might be interested in.
  • To get your consent to further contact you regarding any other services you might be interested in.
  • Maintenance and support activities of our CRM system and related activities.

All the information gathered via contact forms is processed by the following services:

  • WordPress (Privacy Policy)
  • Hubspot (Privacy Policy)
  • Gmail services that deliver notifications about the filled out contact forms to our employees (Privacy Shield)

3.3. Purposes and legal basis for data processing

If you fill out a contact form to get an expert’s take on your project or to get familiar with the services our company delivers, we process your data in order to enter into a contract and comply with our contractual obligations (to render Services), or answer to your request. This way, we may use your personal information to provide services to you, as well as process transactions related to the services you inquired about from us. For example, we may use your name or an e-mail address to send an invoice or to establish communication throughout the whole service delivery life cycle. We may also use your personal information you shared with us to connect you with other of our team members seeking your subject matter expertise. In case you use multiple services offered by our company, we may analyze your personal information and your online behavior on our resources to deliver an integrated experience. For example, to simplify your search across a variety of our services to find a particular one or to suggest relevant product information as you navigate across our websites.

With an aim to enhance our productivity and improve our collaboration—under our legitimate interest—we may use your personal data (e.g., an e-mail, name, job title, or activity taken on our resources) to provide the information we believe may be of interest to you. Additionally, we may store the history of our communication for the legitimate purposes of maintaining customer relations and/or service delivery, as well as we may maintain and support the system, in which we store collected data.

If you fill out contact forms for any other purpose, including the download of white papers or to request a demo, we process data with a legitimate interest to prevent spam and restrict the direct marketing of third-party companies. Our interactions are aimed at driving engagement and maximizing the value you get through our services. These interactions may include information about our new commercial offers, white papers, newsletters, content, and events we believe may be relevant to you.

3.4. Data retention period

We set a retention period for your data collected from contact forms on our websites to 1 year. This data may be further used to contact you if we want to send you anything relevant to your initial request (e.g., updated information on the white papers you downloaded from our websites).

3.5. Data recipients

We do not transfer data to third parties, apart from the cases described in the General data processing section and this section.

4. Data we gather from our web forms

4.1. We collect the following categories of data

When you answer a question and/or provide information via chatbot, you share the following information with us:

  • Name/surname
  • Position
  • Phone number
  • E-mail
  • Location
  • Company name
  • Any other information you provided to us from your request

4.2. How we process the data gathered

The information gathered is transferred to our CRM or Hubspot. Later, it may be used to contact you with something relevant to your initial request, provide further information related to the topic you requested, and deliver quality service.

By sharing personal information with us, you are giving consent for us to rightfully use and process in any way your data, including for the following business purposes:

  • Send any updates regarding services you have shown interest in or provide further information related to the topic you requested.
  • Contact and communicate with you regarding your initial request.
  • To get your consent to further contact you regarding any other services you might be interested in.
  • Maintenance and support activities of our CRM system and related activities, etc.

All the information gathered via chatbot is processed by the following services:

  • WordPress (Privacy Policy)
  • Gmail services that deliver notifications about the filled out contact forms to our employees (Privacy Shield)
  • Drift.com, Inc. (Privacy Policy)

4.3. Purposes and legal basis for data processing

If you share personal data via chatbot to get an expert’s take on your project or to get familiar with the services our company delivers, we process your data in order to enter into a contract and to comply with our contractual obligations (to render Services), or answer to your request. This way, we may use your personal information to provide services to you, as well as process transactions related to the services you inquired from us. For example, we may use your name or an e-mail address to send an invoice or to establish communication throughout the whole service delivery life cycle. We may also use your personal information you shared with us to connect you with other of our team members seeking your subject matter expertise. In case you use multiple services offered by our company, we may analyze your personal information and your online behavior on our resources to deliver an integrated experience. For example, to simplify your search across a variety of our services to find a particular one or to suggest relevant product information as you navigate across our websites.

With an aim to enhance our productivity and improve our collaboration—under our legitimate interest—we may use your personal data (e.g., an e-mail, name, job title, or activity taken on our resources) to provide information we believe may be of interest to you. Additionally, we may store the history of our communication for the legitimate purposes of maintaining customer relations and/or service delivery, as well as we may maintain and support the system, in which we store collected data.

If you share personal data via chatbot for any other purpose we process data with a legitimate interest to prevent spam and restrict direct marketing of third-party companies. Our interactions are aimed at driving engagement and maximizing value you get through our services. These interactions may include information about our new commercial offers, white papers, newsletters, content, and events we believe may be relevant to you.

4.4. Data retention period

We set a retention period for your data collected from communication with us via chatbot to 6 years. This data may be further used to contact you if we want to send you anything relevant to your initial request (e.g., updated information on your initial request, etc).

4.5. Data recipients

We do not transfer data to third parties, apart from the cases described in the General data processing section and this section.

5. Data we gather via e-mails, messengers, widgets, and phones

5.1. We collect the following categories of data

When you interact with us via any other means and tools, we gather the following information about you:

  • Name/surname
  • Position
  • Phone number
  • E-mail
  • Location
  • Company name
  • Any other information you provided to us from your request

The information about a customer call is stored in our internal system and includes a full call recording (starting the moment a connection was established), a voice recording if any available, a phone number, and a call duration.

5.2. How we process the data gathered

All the requests acquired via e-mail are stored within a business Gmail account of Naviteq located at the Google’s server. The information about the request is further transferred and stored in internal CRM either by employees of Naviteq manually or automatically for further processing according to our purposes. We may maintain and support the system, in which we store collected data.

5.3. Purposes and legal basis for data processing

When you contact us via any other means to get an expert’s take on your project / our services or to make any kind of a request, we process your data in order to enter into a contract, to comply with our contractual obligations (to render Services), or answer to your request.

This way, we may use your personal information to provide services to you, as well as process transactions related to the services you inquired from us. For example, we may use your name or an e-mail address to send an invoice or to establish communication throughout the whole service delivery life cycle. We may also use your personal information you shared with us to connect you with other of our team members seeking your subject matter expertise. In case you use multiple services offered by our company, we may analyze your personal information and your online behavior on our resources to deliver an integrated experience. For example, to simplify your search across a variety of our services to find a particular one or to suggest relevant product information as you navigate across our websites. With an aim to enhance our productivity and improve our collaboration, what is our legitimate interest, we may use your personal data—such as an e-mail, name, job title, or activity taken on our resources—to provide information we believe may be of interest to you. Additionally, we may store the history of our communication for the legitimate purposes of maintaining customer relations and/or service delivery.

If you communicate with us for any other purpose we process data with a legitimate interest to prevent spam and restrict direct marketing of third-party companies. Our interactions are aimed at driving engagement and maximizing value you get through our services. These interactions may include information about our new commercial offers, white papers, newsletters, content, and events we believe may be relevant to you or your initial request.

5.4. Data retention period

We set a retention period for the data collected to 6 years. This data may be further used to contact you if we want to send you anything relevant to your initial request.

5.5. Data recipients

We do not share data with third parties, apart from the cases described in the General data processing section and cases stipulated in our third partner’s privacy policies.

6. Data we gather if you are our customer

6.1. We collect the following categories of data

If you are our customer, you have already shared the following information with us to process:

  • Names/surnames of contact persons
  • Positions
  • Phone numbers
  • E-mails
  • Skype IDs
  • Company name/address
  • Any other information you provided to us during service delivery
  • History of our communication, etc.

6.2. How we process the data gathered

  • Information about the existing customers is transferred to our internal CRM (by our employees manually or automatically on receiving a contact form) and Hubspot (HubSpot, Inc. Privacy Policy) for further processing a customer request and providing relevant services, as well as developing recommendations on improving the services we deliver. We may further need any maintenance and support activities of our CRM system or any related activities.
  • To share contact information and information related to the services a customer is interested in, we may use the following messengers: Skype (Privacy Policy), Viber (Privacy Policy), WhatsApp (Privacy Policy), or Telegram (Privacy Policy), as well as e-mail services—Gmail (Privacy Policy) or Outlook (Privacy Policy)
  • To store and share project requirements or any other information submitted by a customer (e.g., a project budget estimation to deliver a suitable commercial offer, UI mockups submitted by a customer, test access to a customer system, etc.), we may use services of Google (Privacy Policy), Adobe (Privacy Policy), Microsoft Office (Privacy Policy), Atlassian (Privacy Policy), and Trello (Privacy Policy)
  • To provision phone calls in a distributed manner, Naviteq makes use of services to store historical data about the activities conducted.
  • To establish internal business processes within our departments and teams and to ensure timely request processing, we make use of Trello (Privacy Policy) and Atlassian (Privacy Policy). These services may store project information related to a technology stack, budget, roadmap, deadlines, Naviteq project team, etc.
  • To store the audio recordings of negotiations with a customer in order to clarify details if necessary and conduct meetings with previous, existing, and potential customers, we make use of GoToMeeting (Privacy Policy), and Hangouts (Privacy Policy), or Zoom (Privacy Policy).
  • To store case studies, describing a delivered project approved by a customer, we use an internal web portal—SharePoint Portal (Privacy Policy)—which only employees of Naviteq can access.
  • To provision contracts, all the data about the active customers is stored in a secured internal network resource with limited access. This resource is available only to our account managers or other employees concerned for the purpose of improving service delivery while establishing communication with a customer, issuing an invoice, and generating reports for a customer. Additional services Naviteq uses for issuing invoices Azets AS (Privacy Policy). These services process data in compliance with the privacy policies of the mentioned services.
  • Additionally, by sharing with us this information you are giving consent to contact you in order to get your consent for the possibility to contact you regarding any other services you might be interested in

6.3. Purposes and legal basis for data processing

We use personal data submitted for the following purposes:

To fulfill/comply with our contractual obligations or answer your request. For example, we use your name or an e-mail in contact to send invoices or communicate with you at any stage of the service delivery life cycle. This way, we may use your personal information to provide services to you, as well as process transactions related to the services you inquired from us. For example, we may use your name or an e-mail address to send an invoice or to establish communication throughout the whole service delivery life cycle. We may also use your personal information you shared with us to connect you with other of our team members seeking your subject matter expertise. In case you use multiple services offered by our company, we may analyze your personal information and your online behavior on our resources to deliver an integrated experience. For example, to simplify your search across a variety of our services to find a particular one or to suggest relevant product information as you navigate across our websites.

With an aim to enhance our productivity and improve our collaboration, what is our legitimate interest, we may use your personal data—such as an an e-mail, name, job title, or activity took on our resources — to provide the information we believe may be of interest to you and communicate with you in order to get your consent for a possibility to contact you regarding any other services you might be interested in. Additionally, we may store the history of our communication for the legitimate purposes of maintaining customer relations and/or service delivery as well as to maintain and support our CRM system and related activities.

6.4. Data retention period

We set the retention period for your data about our customer to 1 year from last Service delivery. We keep it to be able to reach you when we have something relevant to your initial request (for example, updated information on related services, news, events, updates, etc).

6.5. Data recipients

We do not share data with third parties, apart from the cases described in the General data processing section or in this section.

7. Data we gather from the attendees of our events

7.1. We collect the following categories of data

When you register or attend an event organized by Naviteq, you share the following information with us:

  • Names/surnames of contact persons
  • Positions
  • Phone numbers
  • E-mails
  • Skype IDs
  • Company name/address
  • Any other information you provided to us during service delivery
  • History of our communication, etc.

7.2. How we process the data gathered

Data about users who filled out a contact form is stored in our internal CRM, which shall be maintained and supported, and Hubspot (HubSpot, Inc. Privacy Policy) — by our employees manually or automatically on receiving a contact form — for further processing a customer request and providing relevant services, as well as developing recommendations on improving the services we deliver.

To share contact information, as well as information related to the events and services that may be of interest to a customer, Naviteq may use the following:

  • Messengers: Skype (Privacy Policy), Viber (Privacy Policy), WhatsApp (Privacy Policy), or Telegram (Privacy Policy)
  • E-mail services Gmail (Privacy Policy) or Outlook (Privacy Policy)
  • Social media platforms: LinkedIn (Privacy Policy)
  • VOIP phone and conferencing services: GoToMeeting (Privacy Policy), Hangouts (Privacy Policy) or Zoom (Privacy Policy).

To provide users with the possibility to register for an event organized by Naviteq and acquire tickets, we use Eventbrite (Privacy Policy).

To store and share information about attendees of the events organized by Naviteq, as well as to improve all the online activities related to such events, Naviteq makes use of the services of Google (Privacy Policy) and Microsoft (Privacy Policy)

To enable marketing activities and share information about relevant services provided by our company, we use remarketing and advertising instruments available through Google Adwords (Privacy Policy).

To build a strong community around the events organized by Naviteq and to interact with those interested in our services, we use Meetup.com (Privacy Policy).

To optimize internal processes and improve communication channels, we may use Atlassian (Privacy Policy) and Trello (Privacy Policy).

7.3. Purposes and legal basis for data processing

To establish efficient communication with customers about our services, we may use the following data:

  • To fulfill and comply with our contractual obligations or answer to your request. To maintain contract development, we use your contact data to send transactional information via e-mail, Skype, or any other communication means or services. Your contact data is also used to confirm your request, respond to any of your questions, inquiries, or requests, provide support, as well as send you any updates on the services we deliver.
  • To fulfill our legitimate interest, we use your contact information and information about your interaction with our services to send promotional materials that we find relevant to you via e-mail, Skype, or any other communication means or services. Our interactions are aimed at driving engagement and maximizing the value you get through our services. These interactions may include information about our new events, commercial offers, newsletters, content, and events we believe may be relevant to you. To fulfill our legitimate interest, we use your contact information which is stored at our CRM system in order to maintain and support our CRM system and carry on any related activities.

7.4. Data retention period

We retain your customer data for 6 years from the last event you registered for. We keep it so that we can reach you when we have something relevant to your initial request (for example, updates by phone, e-mail, etc.).

7.5. Data recipients

We do not share personal data with third parties, except where Naviteq must provide a list of registrants to the organizer of an event in order to ensure an acceptable level of organization and security.

8. General data processing and data storage

Our processing means any operation or set of operations that is performed on personal data or on sets of personal data, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction, support, maintenance, etc.

The retention period varies depending on the type of data. When the retention period expires, we either delete or anonymize the personal data collected. If data was transferred to backup storage and therefore cannot be deleted, we continue to store it securely but do not use it for any purpose. In all other cases, we proceed with deletion.

The information available through our websites that was collected by third parties is subject to the privacy policies of these third parties. In this case, the retention period of storing data is also subject to the privacy policies of these third parties.

To prevent spam, we keep track of spam and fraudulent accounts, which may be blocked through filtering at the server level.

Requests containing words that may be treated as spam-related or that may promote the distribution of misleading information are filtered at the server level, as well as manually by company employees.

Data is stored on our servers, as well as on cloud services provided by Google, Amazon, HubSpot, and other services stipulated in this policy, including Drift.com.

We do not make automated decisions, including profiling.

9. Your rights

Below, you will find a list of the rights you are entitled to. Please note that some of these rights may be limited where a request would expose the personal information of another individual who has the same privacy rights. In such cases, we will not be able to satisfy your request for data deletion if it contains information we are entitled to keep by law.

The right to be informed and to access information. You have the legal right to access your personal data, as well as to ask whether we use this data for any purpose. In line with our general policy, we will provide you with a free copy of the personal information in use within a month after we receive your request. We will send your information via a password-protected PDF file. For excessive or repeated requests, we may charge a fee. For numerous or complex requests, we may extend our response time by up to two additional months; in that case, you will be informed of the reasons for the extension. If we refuse to address a particular request, we will explain why and provide you with a list of further actions you may take. Should you wish to take further action, we will require two trusted IDs from you to prove your identity. You may forward your requests to our Data Protection Officer ([email protected]). Please provide information about the nature of your request to help us process your inquiry.

The right to rectification. If you believe we store any of your personal data that is incorrect or incomplete, you may ask us to correct or supplement it. You can also change your information by logging into your account with us.

The right to erasure, or “the right to be forgotten”. Under this principle, you may ask us to delete or remove your personal data if there is no solid reason for its continued processing. If you would like us to remove you from our database, please e-mail [email protected]. The right to be forgotten may be exercised for the following reasons:

  • The data no longer has a relation to the original purpose for which it was collected.
  • You withdraw consent with respect to the original reason data was processed, and there is no other reason for us to continue to store and process your personal data.
  • You have objections to processing your personal data, and there are no overriding legitimate reasons for us to continue to process it.
  • Your personal data has been unlawfully processed.
  • Your personal data has to be deleted to comply with a legal obligation under European Union or Member State law to which Naviteq is subject.
  • Your personal data has been collected in relation to the offer of information society services.

The right to restrict processing. Under this right, you may ask us to limit the processing of your personal data. In this regard, we may store information sufficient to identify which data you want to be blocked, but we cannot process it further. The right to restrict processing applies in the following cases:

  • Where you contest the accuracy of your personal data, we will restrict data processing until we have verified the accuracy of your personal data.
  • Where you have objected to data processing under legitimate interests, we will consider whether our legitimate interests override yours.
  • When data processing is unlawful, and you oppose data deletion and request restriction instead.
  • If we no longer need your personal data, but you require this data to establish, exercise or defend a legal claim.

If we have disclosed your personal data in question to third parties, we will inform them about the restriction on data processing, unless it is impossible or involves disproportionate effort to do so. We will inform you if we decide to lift a restriction on data processing.

The right to object. You are entitled to object to the processing of your personal data based on legitimate interests (including profiling) and for direct marketing purposes (including profiling). The objection must be made on “grounds relating to his or her particular situation.” We will inform you of your right to object in the first communication you receive from us. We will stop processing your personal data for direct marketing purposes as soon as we receive an objection.

The right to data portability. You are entitled to obtain the personal data processed by Naviteq so you can use it for your own purposes. This means you have the right to receive the personal data you have shared with us in a structured, machine-readable format, so you can transfer it to a different data controller. This right applies in the following circumstances:

  • Where you have provided the data to Naviteq.
  • Where data processing is carried out because you have given Naviteq your consent to do so.
  • Where data processing is carried out to develop a contract between you and Naviteq.
  • Where data processing is carried out automatically. (No membership data is processed using automated means, so this right does not apply).

Withdrawal of consent. If we process your personal data based on your consent (as indicated at the time of collection of such data), you have the right to withdraw your consent at any time. Please note that if you exercise this right, you may then have to provide your consent on a case-by-case basis for the use or disclosure of certain personal data, if such use or disclosure is necessary to enable you to use some or all of our services.

Right to file a complaint. You have the right to file a complaint about Naviteq’s handling of your data with the supervisory authority of your country or a European Union Member State.

10. Data security and protection

We use data hosting service providers in the United States and Ireland to store the information we collect, and we use additional technical measures to secure your data.

These measures include, without limitation: data encryption, password-protected access to personal information, limited access to sensitive data, encrypted transfer of sensitive data (HTTPS, IPSec, TLS, PPTP, and SSH), firewalls and VPNs, intrusion detection, and antivirus software on all production servers.

The data collected by third-party providers is protected by them and is subject to their terms and privacy policies.

The data collected on our websites by Naviteq, as well as the data, which you entrust us under NDAs and contracts, is protected by us. We follow the technical requirements of GDPR and ensure security standards are met without exception.

Though we implement safeguards designed to protect your information, no security system is impenetrable. Due to the inherent nature of the Internet, we cannot guarantee that data is absolutely safe from intrusion by others during transmission over the Internet, while stored on our systems, or otherwise in our care.

11. Data transfer outside EEA

We collect information worldwide and primarily store it in the United States and Ireland. We transfer, process, and store your information outside of your country of residence, in any region where we or our third-party service providers operate, for the purpose of delivering our services to you and for maintenance and support purposes. Whenever we transfer your information, we take precautionary measures to protect it. Data held by third-party providers may likewise be transferred to different countries globally for processing; these transfers fall under the terms and privacy policies of those providers and/or under standard data protection clauses.

The data collected by Naviteq may be transferred across our offices; we are headquartered in Israel.

12. General description

We may supplement or amend this policy by additional policies and guidelines from time to time. We will post any privacy policy changes on this page. We encourage you to review our privacy policy whenever you use our services to stay informed about our data practices and the ways you can help to protect your privacy.

Our services are not directed to individuals under 16. We do not knowingly collect personal information from individuals under 16. If we become aware that an individual under 16 has provided us with personal information, we will take measures to delete such information.

If you disagree with any changes to this privacy policy, you will need to stop using our services.

Contact us

Your information is controlled by Naviteq Ltd. (Israel). If you have questions or concerns about how your information is handled, please direct your inquiry to Naviteq Ltd. (Israel), which we have appointed as responsible for facilitating such inquiries.

Naviteq Ltd. Israel:

Alon Building 1, Yigal Alon St 94, Tel Aviv-Yafo, Israel

Phone/fax: +972 (58) 4448558

E-Mail: [email protected]