January 5, 2026


Streamline DNS management for AWS PrivateLink deployment with Amazon Route 53 Profiles | Amazon Web Services – AWS Blog


For large enterprises adopting AWS PrivateLink interface endpoints, the key challenges revolve around streamlining deployment processes, minimizing the number of endpoints, and optimizing costs at scale. A proven approach to address these challenges is using AWS Transit Gateway alongside Amazon Route 53 Resolver, enabling the efficient sharing of AWS PrivateLink interface endpoints across multiple Amazon Virtual Private Clouds (VPCs) and on-premises environments. It allows enterprises to minimize the number of required interface endpoints, resulting in cost savings and lower operational overhead.
PrivateLink facilitates private connectivity between your VPC and supported AWS services, software as a service (SaaS) applications, or third-party services hosted on AWS or on-premises. PrivateLink uses VPC Interface Endpoints, which establish secure connections between your VPC and the target service. However, as organizations expand and introduce more VPCs and accounts, deploying these Interface Endpoints across thousands of VPCs, especially in multi-account environments, can become increasingly complex and costly.
Amazon Route 53 Profiles provides a new opportunity to revisit this architecture and enhance it even further. Integrating Route 53 Profiles allows you to simplify and centralize DNS management across a vast number of VPCs across multiple AWS accounts, making your PrivateLink deployment more scalable.
In this post, we show you how PrivateLink enables secure, private connectivity between your VPCs, whether they are within the same account, across multiple accounts, or integrated with on-premises environments, and AWS services. Whether you’re scaling your infrastructure or optimizing your architecture, this post provides a practical, step-by-step guide to mastering PrivateLink deployments.
Adopting a centralized deployment of PrivateLink in a hub and spoke model addresses the challenges associated with scaling PrivateLink across numerous VPCs and accounts. In this setup, shown in Figure 1, PrivateLink VPC endpoints are centralized and deployed within a Shared Services VPC. Spoke VPCs in Dev and Prod accounts can access these centralized endpoints by connecting to the Shared Services VPC through a Transit Gateway or AWS Cloud WAN. An on-premises data center can access these centralized PrivateLink VPC endpoints by establishing hybrid connectivity with the AWS environment through AWS Direct Connect or AWS Site-to-Site VPN.
Figure 1: Centralized VPC endpoint in a Shared Services VPC
DNS management is a critical component when implementing a centralized deployment model. When creating a VPC interface endpoint for any PrivateLink-enabled service, you have the option to enable private DNS by choosing the Enable DNS name option during the endpoint setup process. Enabling this feature creates an AWS-managed private hosted zone (PHZ), which resolves the public DNS name of the AWS service to the private IP address of the VPC endpoint. However, this managed PHZ is only accessible within the hub VPC that hosts the VPC endpoint and can’t be shared with other spoke VPCs. To overcome this, we use a custom PHZ, which we discuss in the following section.
For VPC-to-VPC and on-premises connectivity, we start with disabling private DNS for the VPC endpoint.
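As a sketch, creating the centralized endpoint with private DNS disabled might look like the following AWS CLI call. All resource IDs are placeholders for illustration; substitute your own VPC, subnet, and security group IDs.

```shell
# Create a centralized interface endpoint for Lambda in the Shared Services VPC.
# --no-private-dns-enabled skips the AWS-managed PHZ so that a custom PHZ can
# handle resolution for hub and spoke VPCs. All IDs below are placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0sharedservices \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.lambda \
  --subnet-ids subnet-0aaa subnet-0bbb \
  --security-group-ids sg-0ccc \
  --no-private-dns-enabled
```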
Figure 2: Modify private DNS name
After you have disabled private DNS names, you can create a Route 53 PHZ. Use the service's public DNS name as the zone name and configure an alias record that points to the DNS name of the AWS service's VPC endpoint.
Figure 3: Create Route 53 alias record
In this example, we are creating an endpoint for AWS Lambda in the us-east-1 AWS Region, so the endpoint DNS name ends with lambda.us-east-1.vpce.amazonaws.com.
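The zone and alias record from Figure 3 could be created from the CLI roughly as follows. The zone IDs, VPC ID, and vpce-* DNS name are placeholders; in practice you would read the endpoint's regional DNS name and its hosted zone ID from the DnsEntries field of `aws ec2 describe-vpc-endpoints`.

```shell
# Create a custom PHZ for the Lambda service domain, associated with the hub VPC.
aws route53 create-hosted-zone \
  --name lambda.us-east-1.amazonaws.com \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0sharedservices \
  --caller-reference lambda-phz-example-001

# Add an alias A record that resolves the service's public name to the
# endpoint's regional DNS name (and therefore its private IP addresses).
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0EXAMPLEPHZ \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "lambda.us-east-1.amazonaws.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z0EXAMPLEVPCE",
          "DNSName": "vpce-0abc123.lambda.us-east-1.vpce.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```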
When this custom PHZ is created in the hub VPC, you can associate it with other spoke VPCs. This approach makes sure that all spoke VPCs can resolve the AWS service’s public DNS name to the private IP address of the endpoint, enabling seamless connectivity across multiple VPCs.
Typically, to enable DNS resolution for VPC Endpoints across multiple VPCs, you would need to manually associate the PHZ for each VPC Endpoint with every spoke VPC. If both the hub and spoke VPCs reside within the same AWS account, then this association can be performed through the AWS Management Console. However, if the VPCs are in different accounts, then you would need to use the AWS Command Line Interface (AWS CLI) or SDK to complete the association. This process is described in the Route 53 developer guide.
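The cross-account association is a two-step handshake: an authorization in the hub account, then the association from the spoke account. A minimal sketch, with placeholder zone and VPC IDs:

```shell
# Hub account: authorize the spoke VPC to associate with the PHZ.
aws route53 create-vpc-association-authorization \
  --hosted-zone-id Z0EXAMPLEPHZ \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0spokedev

# Spoke account: perform the association.
aws route53 associate-vpc-with-hosted-zone \
  --hosted-zone-id Z0EXAMPLEPHZ \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0spokedev

# Hub account: the authorization can be removed once the association exists.
aws route53 delete-vpc-association-authorization \
  --hosted-zone-id Z0EXAMPLEPHZ \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0spokedev
```

Repeating this handshake for every PHZ and every spoke VPC is exactly the overhead that Route 53 Profiles remove.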
Figure 4: Centralized VPC endpoint in a Shared Services VPC using cross account PHZ association
To streamline this process and make it more scalable, Route 53 Profiles can be used. In the following section, we explore how Route 53 Profiles can be used to enhance the existing solution.
The architecture diagram in Figure 5 shows a single-Region workload. We have deployed a Dev VPC in a Dev account and a Prod VPC in a Prod account. As stated previously, these VPCs are connected using either Transit Gateway or AWS Cloud WAN. This architecture facilitates the use of the VPC endpoints in the Shared Services VPC by Amazon Elastic Compute Cloud (Amazon EC2) instances. These instances, residing in either the Dev VPC or the Prod VPC, can privately access Amazon Kinesis and Lambda.
Figure 5: Centralized VPC endpoint in a Shared Services VPC using Route 53 Profiles for DNS resolution
The following steps go through the deployment process and show how Route 53 Profiles streamline this process.
The implementation of VPC endpoints for Kinesis and Lambda means that all the VPCs can resolve the public DNS names for these services to the corresponding private IP addresses of their respective VPC endpoints. Therefore, all resources within these spoke VPCs can now access the Kinesis and Lambda services securely through either Transit Gateway or AWS Cloud WAN, and then through the VPC endpoints in the Shared Services VPC, without the need to traverse the public internet.
Moving forward, when you create a new VPC endpoint for any other supported AWS service, the only step necessary is to associate the PHZ for that VPC endpoint with the centralized Route 53 Profile. When this association is established, all the VPCs linked to this Route 53 Profile can resolve the DNS names of the newly created VPC endpoints.
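Adding a PHZ to the Profile is a single call from the Shared Services account. A sketch, with placeholder Profile and zone identifiers (check the current Route 53 Profiles CLI reference for exact parameter names):

```shell
# Shared Services account: attach the endpoint's PHZ to the centralized Profile.
# Every VPC associated with the Profile then inherits this zone automatically.
aws route53profiles associate-resource-to-profile \
  --profile-id rp-0example \
  --name lambda-endpoint-phz \
  --resource-arn arn:aws:route53:::hostedzone/Z0EXAMPLEPHZ
```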
Similarly, when you provision new VPCs in existing or new accounts, you associate those VPCs with the shared Route 53 Profile. You also provide Layer 3 connectivity to the Shared Services VPC using Transit Gateway or AWS Cloud WAN. As a result, all the new VPCs automatically become associated with all the PHZs in the Shared Services account, providing seamless DNS resolution to the respective VPC endpoints.
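Sharing the Profile across accounts goes through AWS Resource Access Manager (RAM); the consuming account then associates its VPC with the shared Profile. A hedged sketch with placeholder account, organization, Profile, and VPC identifiers:

```shell
# Shared Services account: share the Profile with the organization via AWS RAM.
aws ram create-resource-share \
  --name route53-profile-share \
  --resource-arns arn:aws:route53profiles:us-east-1:111111111111:profile/rp-0example \
  --principals arn:aws:organizations::111111111111:organization/o-example

# New spoke account: associate its VPC with the shared Profile.
aws route53profiles associate-profile \
  --profile-id rp-0example \
  --resource-id vpc-0newspoke \
  --name prod-vpc-association
```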
In the scenario shown in Figure 6, we establish Layer 3 connectivity between the AWS environment and an external network. On-premises resources need to reach AWS services, such as Kinesis and Lambda, so we must implement a solution for on-premises DNS resolution.
Figure 6: Centralized VPC endpoint in a Shared Services VPC using Route 53 Profiles for DNS resolution with on-premises
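A common building block for this (an assumption on our part; the post does not prescribe a specific mechanism) is a Route 53 Resolver inbound endpoint in the Shared Services VPC, with the on-premises DNS servers conditionally forwarding queries for the AWS service domains to it. A sketch with placeholder subnet and security group IDs:

```shell
# Create an inbound Resolver endpoint in the Shared Services VPC. On-premises
# DNS servers then forward queries for the relevant amazonaws.com names to the
# endpoint's IP addresses, which resolve via the PHZs attached to the Profile.
aws route53resolver create-resolver-endpoint \
  --creator-request-id inbound-example-001 \
  --name shared-services-inbound \
  --direction INBOUND \
  --security-group-ids sg-0ddd \
  --ip-addresses SubnetId=subnet-0aaa SubnetId=subnet-0bbb
```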
In this post, we discussed how Amazon Route 53 Profiles can easily be integrated to help with DNS management when using a centralized model with AWS Transit Gateway or AWS Cloud WAN for AWS PrivateLink deployments. To get started, visit the AWS PrivateLink and Amazon Route 53 Profiles pages.

Kunj
Kunj is a Technical Account Manager at AWS and is based out of Vancouver, Canada. He has an extensive background in Network and Infrastructure engineering prior to this role. He is passionate about new technologies and enjoys helping customers build, implement, and optimize their cloud infrastructure on AWS.

Salman Ahmed
Salman Ahmed is a Senior Technical Account Manager in AWS Enterprise Support. He enjoys helping customers in the travel and hospitality industry to design, implement, and support cloud infrastructure. With a passion for networking services and years of experience, he helps customers adopt various AWS networking services. Outside of work, Salman enjoys photography, traveling, and watching his favorite sports teams.

Ankush Goyal
Ankush Goyal is a Senior Technical Account Manager at AWS Enterprise Support, specializing in helping customers in the travel and hospitality industries optimize their cloud infrastructure. With over 20 years of IT experience, he focuses on leveraging AWS networking services to drive operational efficiency and cloud adoption. Ankush is passionate about delivering impactful solutions and enabling clients to streamline their cloud operations.
