Architecture

Overlapping CIDRs in Cross-Account Kubernetes Migrations — Why PrivateLink Resource Endpoints Change Everything

You are migrating microservices between Kubernetes clusters across AWS accounts, but the source uses /16 CIDRs that collide with corporate. The 2022 playbook (Private NAT Gateway + NLB per service) is obsolete. Here is what re:Invent 2024 gave us.

Alexandre Agius

AWS Solutions Architect

6 min read

Cross-account Kubernetes migration looks easy on the whiteboard and brutal in production. The specific failure mode I want to talk about: two AWS Organizations, both using 10.0.0.0/16, both running dozens of EKS services that need to talk to each other during a months-long migration.

The old playbook — Private NAT Gateway + classic PrivateLink with an NLB per service — works but costs a fortune in NLBs, burns port ranges, and turns every new service into a ticket. Since re:Invent 2024 there is a much better answer: PrivateLink Resource Endpoints with Resource Gateway and Resource Configuration.

This post compares the two approaches on a real migration scenario and shows why the new primitive collapses the complexity.

The Problem

Migration scenario:

  • Source — legacy AWS Organization, multiple accounts, every VPC on 10.x.x.x/16
  • Target — new AWS Organization, every VPC on 10.x.x.x/16 (inherited from corporate)
  • Workload — 40+ microservices on EKS, being migrated one at a time over 6 months
  • Requirement — bidirectional communication during migration (a service in source must be callable from target, and vice versa)
  • Constraint — you cannot renumber either side

Overlapping /16 ranges mean classic VPC peering, TGW attachments, and any form of direct IP routing are off the table.

The 2022 Playbook (Obsolete, But Worth Understanding)

The pre-re:Invent-2024 approach:

  1. Add a secondary non-overlapping CIDR (e.g. 100.64.0.0/16 from RFC 6598) to both VPCs.
  2. Deploy an internal NLB for each service, bound to the secondary CIDR.
  3. Create a classic PrivateLink endpoint service in front of each NLB.
  4. Consumers create interface endpoints into the endpoint service.
  5. Use Private NAT Gateway to translate addresses where needed.

Why it’s painful:

  • One NLB per service — at $16/mo + data processing, 40 services = $640/mo of NLBs alone
  • Port exhaustion — NLBs cap listeners and targets per load balancer; dense service meshes hit those quotas
  • Secondary CIDR management — every VPC now has two CIDR blocks; routing tables grow
  • No L4 flexibility — PrivateLink classic is TCP to a single port; no UDP, no multi-port
  • Each new service is a ticket — new NLB, new endpoint service, new interface endpoint, new DNS record

It works. It’s just expensive and slow.

The 2024 Primitive: Resource Endpoints

AWS PrivateLink Resource Endpoints (launched at re:Invent 2024) fundamentally change the data-plane model.

Three new building blocks

  • Resource Gateway — an ENI-backed data-plane endpoint in the producer VPC. Analogy: like an NLB, but managed and with no target groups.
  • Resource Configuration — a reference to a specific resource (IP, ARN, or DNS name) behind the gateway. Analogy: the “target” of the gateway.
  • Resource VPC Endpoint — an interface endpoint in the consumer VPC that points to the gateway. Analogy: same as a classic VPCE, but of type Resource.

The key differences:

  • No NLB required — the Resource Gateway is the data-plane ingress
  • No endpoint service required — Resource Configuration replaces it
  • Multiple targets per gateway — one gateway can front many resources
  • IP-based or ARN-based — you point at resources directly, not via a load balancer

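The three building blocks map naturally onto three API requests. A minimal sketch as payload-building functions — the field names are assumptions modeled on the boto3 VPC Lattice and EC2 APIs, not taken from this post; verify them against the current AWS reference before use:

```python
# Hypothetical payload builders for the three primitives.
# Field names are assumptions modeled on the VPC Lattice / EC2 APIs
# (create-resource-gateway, create-resource-configuration,
# create-vpc-endpoint with type "Resource").

def resource_gateway_request(name: str, vpc_id: str, subnet_ids: list) -> dict:
    """Producer-side data-plane ingress (replaces the per-service NLB)."""
    return {"name": name, "vpcIdentifier": vpc_id, "subnetIds": subnet_ids}

def resource_configuration_request(name: str, gateway_id: str,
                                   port: int, ip: str) -> dict:
    """One lightweight 'target' per service behind the shared gateway."""
    return {
        "name": name,
        "resourceGatewayIdentifier": gateway_id,
        "portRanges": [str(port)],
        "resourceConfigurationDefinition": {"ipResource": {"ipAddress": ip}},
    }

def resource_endpoint_request(vpc_id: str, subnet_id: str,
                              config_arn: str) -> dict:
    """Consumer-side endpoint: an EC2 VPC endpoint of type 'Resource'."""
    return {
        "VpcEndpointType": "Resource",
        "VpcId": vpc_id,
        "SubnetIds": [subnet_id],
        "ResourceConfigurationArn": config_arn,
    }

gw = resource_gateway_request("source-gw", "vpc-0aaa", ["subnet-0a1"])
cfg = resource_configuration_request("orders", "rgw-0abc", 8080, "10.0.12.34")
ep = resource_endpoint_request("vpc-0bbb", "subnet-0b1",
                               "arn:aws:vpc-lattice:eu-west-1:111122223333:resourceconfiguration/rcfg-0abc")
print(gw["name"], cfg["portRanges"], ep["VpcEndpointType"])
```

Note the asymmetry with the old model: the gateway is created once, and only the second and third payloads repeat per service.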
Architecture Comparison

Old approach (one service):

Consumer Pod → VPC Interface Endpoint → PrivateLink → Endpoint Service → Internal NLB → Target Pod

New approach (many services through one gateway):

Consumer Pod → Resource VPC Endpoint → PrivateLink → Resource Gateway → Resource Config A → Pod A
                                                                    → Resource Config B → Pod B
                                                                    → Resource Config C → Pod C

One Resource Gateway fronts the whole cluster. New services register as Resource Configurations — no new infrastructure.

Concrete Migration Pattern

For the source → target EKS scenario:

  1. Create a Resource Gateway in each VPC. One in source, one in target. Pick a subnet in each.
  2. Register each service as a Resource Configuration. Point at the Kubernetes Service’s cluster IP, or at individual Pod IPs if you need fine-grained targeting.
  3. Create Resource VPC Endpoints on the opposite side. Source creates endpoints pointing at target’s gateway, and vice versa.
  4. Use PrivateLink-managed DNS. Consumers resolve my-service.privatelink.internal to the endpoint’s private IP. No Route 53 private zone required.
  5. Migrate services one at a time. Each migrated service flips its DNS; the rest keep using the Resource Endpoint path.

Both VPCs keep their original /16 CIDRs. No secondary CIDR. No Private NAT Gateway (unless you still need outbound overlapping routing for non-PrivateLink traffic).
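Step 2 of the pattern is the one you repeat for every service, so it is worth scripting. A sketch that turns a service inventory into per-service Resource Configuration payloads plus the DNS name consumers would resolve — the service names, the `.privatelink.internal` suffix, and the payload fields are illustrative assumptions, not confirmed API shapes:

```python
# Hypothetical per-service registration plan for the migration.
# Input: service name -> (cluster IP, port) in the source VPC.
SERVICES = {
    "orders":   ("10.0.12.34", 8080),
    "payments": ("10.0.45.67", 8443),
}

def register_services(gateway_id: str, services: dict) -> list:
    """Build one Resource Configuration payload + consumer DNS name per service."""
    plans = []
    for name, (cluster_ip, port) in sorted(services.items()):
        plans.append({
            "resourceConfiguration": {
                "name": name,
                "resourceGatewayIdentifier": gateway_id,
                "portRanges": [str(port)],
                "resourceConfigurationDefinition": {
                    "ipResource": {"ipAddress": cluster_ip},
                },
            },
            # Assumed naming scheme for the PrivateLink-managed DNS entry.
            "consumerDns": f"{name}.privatelink.internal",
        })
    return plans

for plan in register_services("rgw-0abc", SERVICES):
    print(plan["resourceConfiguration"]["name"], "→", plan["consumerDns"])
```

Because the gateway already exists, onboarding service 41 is one loop iteration, not a ticket.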

Cost Comparison (40 services, one direction)

Cost Component                   Old Approach                             New Approach
Load balancers                   40 × NLB ($16/mo) = $640/mo              $0
PrivateLink endpoint services    40 × free (in-account)                   1 × Resource Gateway + 40 × Resource Configs (gateway priced per hour)
Interface / Resource endpoints   40 × $7.30/mo = $292/mo                  1 × Resource Endpoint ≈ $7.30/mo + data
Data processing                  Paid once (NLB) + once (PrivateLink)     Paid once (Resource Gateway)
Monthly total (illustrative)     ~$932 + data                             ~$200–300 + data

Actual numbers vary by region and traffic — the point is the shape: the new model replaces N load balancers with one gateway, and N endpoint services with N lightweight configurations.
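The table's arithmetic as a tiny model, using the post's illustrative unit prices ($16/mo per NLB, $7.30/mo per interface endpoint). The Resource Gateway hourly rate is an assumption here, and the model excludes Resource Config and data charges, so its "new" total lands below the table's ~$200–300 range — check current PrivateLink pricing for your region:

```python
# Illustrative fixed-cost model for N services, one direction.
NLB_MONTHLY = 16.00        # per NLB, from the post's table
ENDPOINT_MONTHLY = 7.30    # per interface/resource endpoint, from the table
GATEWAY_HOURLY = 0.10      # assumption, not a published price
HOURS_PER_MONTH = 730

def old_monthly(n_services: int) -> float:
    # One NLB + one interface endpoint per service.
    return n_services * (NLB_MONTHLY + ENDPOINT_MONTHLY)

def new_monthly(n_services: int) -> float:
    # One shared gateway + one resource endpoint, regardless of N.
    # Resource Config and data charges are deliberately left out.
    return GATEWAY_HOURLY * HOURS_PER_MONTH + ENDPOINT_MONTHLY

print(f"old: ${old_monthly(40):.2f}/mo")   # scales linearly with N
print(f"new: ${new_monthly(40):.2f}/mo")   # flat in N
```

The shape is the point: the old model is O(N) in load balancers and endpoints, the new one is O(1) in infrastructure with O(N) only in lightweight configurations.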

Gotchas You Will Hit

  • TCP only, for now. UDP-heavy workloads (some gRPC streaming, DNS, QUIC) still need another answer.
  • Same-region requirement. Resource Endpoints do not cross regions. For cross-region, stack on Transit Gateway inter-region peering or endpoint services with VPC peering.
  • Directional consumer → provider. The Resource Endpoint model is asymmetric. If you need true bidirectional mesh, you build two paths (A → B and B → A), each with its own gateway + endpoint pair.
  • ARN-based vs IP-based Resource Configs. ARN-based (e.g. RDS endpoint) gives you stable identity; IP-based is cheaper but breaks if the IP moves. For EKS, ARN-based on a Service’s ARN is cleaner than per-Pod IPs.
  • Security-group semantics. Resource Gateway ENIs have SGs. Don’t forget to allow the consumer-side endpoint ENIs; this is where silent drops happen.
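The last gotcha as code: the ingress rule the Resource Gateway's security group needs so consumer traffic is not silently dropped. The `IpPermissions` shape below is what boto3's EC2 `authorize_security_group_ingress` expects; the CIDR, port, and which source range the gateway actually sees for PrivateLink traffic are assumptions you must confirm for your setup:

```python
# Template for the gateway-side security-group ingress rule.
def gateway_ingress_rule(source_cidr: str, port: int) -> dict:
    """Allow consumer-side traffic into the Resource Gateway on the service port."""
    return {
        "IpProtocol": "tcp",   # Resource Endpoints are TCP-only today
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [
            {"CidrIp": source_cidr,
             "Description": "consumer endpoint ENIs (verify observed source range)"},
        ],
    }

rule = gateway_ingress_rule("10.0.96.0/24", 8080)
# Hypothetical usage with boto3:
# ec2.authorize_security_group_ingress(GroupId="sg-0gateway",
#                                      IpPermissions=[rule])
print(rule["IpProtocol"], rule["FromPort"])
```

If connections hang with no error on either side, audit this rule first — security-group drops produce no logs unless VPC Flow Logs are enabled.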

When to Still Use the Old Approach

  • Cross-region — Resource Endpoints are single-region only today
  • Non-TCP workloads — UDP, QUIC still need TGW or other paths
  • Existing estate — if you already have 200 NLBs working, don’t migrate them all for sport; migrate the next 50 services with the new pattern

Key Takeaways

  • Overlapping CIDRs are a very common M&A and multi-org problem — plan for them.
  • The 2022 playbook (NAT GW + NLB per service + secondary CIDR) works but does not scale operationally or financially.
  • PrivateLink Resource Endpoints (re:Invent 2024) collapse the model: one Resource Gateway, N Resource Configurations, one Resource Endpoint.
  • Cost-wise, the new pattern removes the NLB line item entirely — and the operational savings are bigger than the infrastructure savings.
  • For cross-account EKS migration with overlapping CIDRs, this is the pattern you want on your whiteboard.

