February 2, 2024 By Ben Ball 3 min read

There are many reasons to move to a managed DNS platform, but they all revolve around a central theme. Once you reach a critical mass of traffic and start getting concerned about the performance and reliability of what you’re delivering, it’s time to consider a managed DNS solution.

There are several well-known options out there, and to a newcomer they can appear relatively similar at first. Every managed DNS provider offers a 100% uptime SLA through a global anycast DNS network. They all have failover options, which can improve resilience. They all provide dashboards and metrics so you can analyze performance. They all charge based on usage.

Yet underneath these table-stakes features, you’ll find some significant differences. The approach different companies take will ultimately impact the performance, scale, and capabilities of your network. It’s important to know which of these features and capabilities are important to you before comparing options.

To help you assemble your list of “must haves”, we put together a few questions that can guide your requirements.

1. What’s your risk profile?

Any managed DNS provider worth its salt will offer a 100% uptime SLA. Yet even that might not be good enough. Network outages happen, and on occasion even a highly resilient global network faces availability issues. 

Having a redundant failover option usually makes sense, particularly for “always on” services that truly require high availability. In some cases, that means signing up with more than one provider. NS1 takes a different approach, offering a separated redundant system that you can manage from the same control plane. 

There’s also the question of how your managed DNS provider actually delivers resilience. The mechanics of failover matter. Is it automated? Is it customizable? How many options do you have? How easy is it to manage those options? Even the most rock-solid redundant DNS option may be worthless if there’s a hiccup in the failover process.
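To make the failover mechanics concrete, here's a minimal sketch of the kind of automated decision a provider's health checks drive. The addresses and the health-check inputs are hypothetical — real providers run distributed health checks at the edge and update answers for you; this only illustrates the decision logic you'd want to be automated and customizable.

```python
# Minimal sketch of automated DNS failover logic (hypothetical endpoints).
# Addresses come from the RFC 5737 documentation range.

PRIMARY = "203.0.113.10"
SECONDARY = "203.0.113.20"

def choose_answer(health: dict) -> str:
    """Return the record answer to serve, preferring the primary origin."""
    if health.get(PRIMARY, False):
        return PRIMARY
    if health.get(SECONDARY, False):
        return SECONDARY
    # Last resort: serve the primary anyway rather than return nothing.
    return PRIMARY

# Normal operation: primary is healthy, so it gets the traffic.
print(choose_answer({PRIMARY: True, SECONDARY: True}))   # 203.0.113.10
# Primary outage: traffic fails over to the secondary.
print(choose_answer({PRIMARY: False, SECONDARY: True}))  # 203.0.113.20
```

The questions above map directly onto this sketch: is the health check automated, can you customize the fallback order, and what happens when every check fails?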

2. What do your developers need?

Most organizations start using a managed DNS solution to improve the experience of customers and end-users. Then they discover that there’s another audience: developers.

Today’s networks are driven by DevOps, edge computing, and serverless architectures, all of which require an API-first approach to infrastructure. Connections to tools like Terraform are also an important requirement for developers as they leverage network infrastructure to build customer-facing services.

When assessing managed DNS solutions, it’s important to investigate the breadth and depth of their API offering and connections to standard tools used by developers. It’s not enough for APIs to simply be available; they should also be well-documented and easy to use.
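As a feel for what an API-first workflow looks like, here's a rough sketch of creating a DNS record through a generic REST-style API. The endpoint path, payload shape, and auth scheme are hypothetical — check your provider's API documentation for the real ones.

```python
import json
import urllib.request

# Hypothetical endpoint and token -- not any specific provider's API.
API_BASE = "https://api.example-dns.com/v1"
API_TOKEN = "YOUR_TOKEN_HERE"

def build_record_request(zone, name, rtype, answers, ttl=300):
    """Assemble an HTTP request that would create a DNS record."""
    payload = {
        "name": name,
        "type": rtype,
        "ttl": ttl,
        "answers": [{"answer": a} for a in answers],
    }
    return urllib.request.Request(
        url=f"{API_BASE}/zones/{zone}/records",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_record_request("example.com", "www.example.com", "A", ["203.0.113.10"])
print(req.full_url)  # https://api.example-dns.com/v1/zones/example.com/records
```

If a workflow like this is awkward — undocumented payloads, no Terraform provider, inconsistent endpoints — your developers will feel it every day.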

3. How will you manage traffic among multiple CDNs and/or clouds?

If you’ve got enough critical mass to need a managed DNS solution, at some point you’ll probably start using multiple clouds or CDNs to deliver applications and content. That means distributing traffic to different providers to optimize performance and improve resilience.

It’s common for a managed DNS provider to offer some form of traffic steering, but there are significant differences in how they operate. You’ll want to see how easy it is to use the traffic steering function in a managed DNS solution. How much manual effort is involved in configurations and deployments?

It’s also important to look at your ability to customize traffic steering options. Will you get the targeted results you’re looking for with the options that are available? Or are the traffic steering options too thin to produce the performance that you really need? Will you use traffic steering for basic load balancing and failover functions, or do your needs go deeper?
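To illustrate what "customizable traffic steering" means in practice, here's a sketch of a two-stage steering chain: filter answers by the client's region, then pick one by weight. The CDN names, regions, and weights are invented for illustration; real providers express this through their own filter or policy configurations.

```python
# Sketch of a customizable traffic-steering chain (illustrative data only).
import random

ANSWERS = [
    {"cdn": "cdn-a", "regions": {"us", "eu"}, "weight": 70},
    {"cdn": "cdn-b", "regions": {"us", "apac"}, "weight": 30},
]

def steer(client_region, answers=ANSWERS, rng=random.random):
    """Keep answers serving the client's region, then weighted-pick one."""
    eligible = [a for a in answers if client_region in a["regions"]] or answers
    total = sum(a["weight"] for a in eligible)
    pick = rng() * total
    for a in eligible:
        pick -= a["weight"]
        if pick <= 0:
            return a["cdn"]
    return eligible[-1]["cdn"]

print(steer("eu"))  # always cdn-a: it's the only answer serving "eu"
print(steer("us"))  # cdn-a roughly 70% of the time, cdn-b roughly 30%
```

The depth question is how many stages like these you can chain, and whether you can plug in your own signals (latency, cost, capacity) rather than just static weights.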

4. How important is performance?

For most applications and services, the speed of any established managed DNS service is enough to get the job done. A few milliseconds faster or slower than industry benchmarks won’t really matter.

Yet there are specific use cases – streaming video and gaming in particular – where those milliseconds can directly impact revenue. In these cases, it’s important to pay particular attention to both the speed of network responses and the depth of traffic steering options.

Since most high-performance applications and services will use multiple clouds and/or CDNs, the ability to automatically steer traffic to the best performing service is critical. You may also want to weigh performance against factors like cost or reliability – another reason to prioritize solutions with customizable traffic steering options.
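The trade-off between performance and cost can be sketched as a simple scoring function: treat cost as an equivalent latency penalty and pick the lowest combined score. The latency and cost figures below are made up for illustration.

```python
# Sketch of weighing measured latency against cost when steering traffic.
# All numbers are illustrative, not real CDN measurements or prices.

CANDIDATES = [
    {"cdn": "cdn-a", "latency_ms": 18.0, "cost_per_gb": 0.08},
    {"cdn": "cdn-b", "latency_ms": 22.0, "cost_per_gb": 0.04},
]

def best_cdn(candidates, cost_weight=0.0):
    """Score = latency + cost penalty; lower is better.

    cost_weight converts $/GB into equivalent milliseconds, letting you
    trade a few ms of speed for a cheaper provider.
    """
    def score(c):
        return c["latency_ms"] + cost_weight * c["cost_per_gb"]
    return min(candidates, key=score)["cdn"]

print(best_cdn(CANDIDATES))                   # cdn-a: the fastest wins
print(best_cdn(CANDIDATES, cost_weight=200))  # cdn-b: cheapness outweighs 4 ms
```

A steering engine that only optimizes raw speed can't express this trade-off — which is why customizable steering options matter even for performance-sensitive workloads.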

If you’re delivering content to mainland China, optimizing performance requires special attention to deployment geography. China’s unique network architecture requires a managed DNS solution with a local presence.

Learn more about IBM NS1 Connect’s Managed DNS solution.