
Linux Package Management in AWS China Regions — RHUI, Squid Proxies, and What EPEL Cannot Do

You have RHEL 9 instances in AWS China regions managed via SSM. Installing PostgreSQL 17 or EPEL packages means opening dozens of dynamic URLs through China's restricted network. Here is what actually works in production.

Alexandre Agius

AWS Solutions Architect

7 min read

AWS China regions (Beijing, Ningxia) have their own network rules. Inside a VPC, you get the usual AWS services. But any package install that hits the public internet — EPEL, PGDG, GitHub release artifacts, pip from PyPI — runs into the same wall: traffic crossing the Great Firewall is filtered, slow, or blocked, and routing dozens of dynamic URLs through it is not going to pass your security review.

If you run RHEL in AWS China and manage it via SSM, this is a problem you will hit within the first month. This post is the practical guide that doesn’t exist in official docs.

The Landscape

What you actually get for free: RHUI

Red Hat Update Infrastructure (RHUI) is Red Hat’s content-distribution network, and AWS runs an RHUI instance in each China region that mirrors:

  • BaseOS — RHEL base packages
  • AppStream — modular RHEL application packages

That’s it. Two repositories, two URLs per region, served by AWS inside the China region. No Great Firewall crossing, no public internet required.

Everything you need that isn’t BaseOS or AppStream — EPEL, PostgreSQL official (PGDG), Docker CE, NVIDIA drivers, anything community — is not in RHUI. You have to reach the public internet to get it.

The misconception that bites everyone

RHUI in China is not served from S3. It sits behind specific regional IPs. This means:

  • ❌ VPC Endpoints for S3 do not help
  • ❌ VPC Endpoints for AWS services do not help
  • ✅ You need a real network path (NAT Gateway, proxy, or mirror)

Teams routinely try to “just add an S3 endpoint” to solve RHUI connectivity and waste days figuring out why it doesn’t work. It doesn’t work because RHUI isn’t on S3.

Solution Ladder — Simplest to Most Complex

Option 1 — Squid proxy in a DMZ subnet (simplest)

The one every team should start with:

  • Deploy a small Squid proxy EC2 instance (t3.small is plenty for a few dozen RHEL instances)
  • Put it in a DMZ subnet with NAT Gateway or public IP
  • Configure Squid to allow a whitelist of destination URLs:
    • RHUI endpoints (regional, stable)
    • EPEL mirror (dl.fedoraproject.org)
    • PGDG mirror (download.postgresql.org)
    • Any other specific repos you need

Configure your RHEL instances’ /etc/dnf/dnf.conf (or per-repo config) to route through the proxy:

# /etc/dnf/dnf.conf
[main]
proxy=http://squid-proxy.example.internal:3128

Why this is the best default:

  • One managed egress point — everything else is truly private
  • Whitelist-based — security team approves 3–5 URLs instead of “the public internet”
  • Squid access logs give you a full audit trail of every package URL hit
  • If an approved URL suddenly moves behind a new CDN, you see it in the logs and can whitelist it
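As a concrete illustration of that audit trail, here is a sketch that summarizes which destinations each client instance reached, parsed from Squid’s native access.log. The log path and field layout are the Squid defaults; the fallback sample entry is synthetic so the sketch runs anywhere:

```shell
# Sketch: per-instance destination summary from Squid's native access.log.
# Log path/format are Squid defaults; the sample line below is synthetic.
LOG="${LOG:-/var/log/squid/access.log}"

# Synthetic fallback entry so the sketch is runnable without a live proxy.
if [ ! -r "$LOG" ]; then
  LOG="$(mktemp)"
  printf '%s\n' \
    '1717000000.123    542 10.0.1.15 TCP_TUNNEL/200 48213 CONNECT dl.fedoraproject.org:443 - HIER_DIRECT/203.0.113.10 -' \
    > "$LOG"
fi

# Native log format: field 3 = client IP, field 7 = requested URL
# (host:port for HTTPS CONNECT tunnels).
summary="$(awk '{ print $3, $7 }' "$LOG" | sort | uniq -c | sort -rn)"
echo "$summary"
```

Each output line is a hit count, a source instance IP, and a destination — exactly the evidence a security review asks for.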

Gotcha: Squid’s default TLS CONNECT handling works for HTTPS package downloads, but some repos use client certificates (Red Hat RHUI entitlements do). Test RHUI through the proxy specifically — if it fails, configure an explicit bypass for RHUI endpoints since they’re already inside AWS China and don’t need the proxy.
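One way to express that bypass is a per-repo override in the RHUI repo file — dnf lets a repository-level proxy setting override the one in [main]. A sketch (the repo id is illustrative; check the actual ids in /etc/yum.repos.d/ on your image):

```ini
# /etc/yum.repos.d/redhat-rhui.repo (excerpt — repo id illustrative)
[rhel-9-baseos-rhui-rpms]
# "_none_" overrides the [main] proxy and connects directly; RHUI is
# already inside the AWS China region, so it doesn't need the proxy.
proxy=_none_
```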

Option 2 — Private repo mirror with reposync / Pulp

For fully air-gapped or high-control environments:

  • Run a private yum/dnf repository server (reposync + createrepo_c, or Pulp)
  • Mirror the external repos once (typically from outside China, then sync inbound via controlled channel)
  • Point all RHEL instances at the internal mirror only — no internet access at all
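A minimal sync sketch using the dnf reposync plugin, typically run from cron on the mirror host. The repo id, paths, and the guard are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Sketch: mirror the newest EPEL packages locally, then (re)build repo
# metadata with createrepo_c. Repo id, paths, and guard are assumptions.
set -u

MIRROR_ROOT="${MIRROR_ROOT:-/srv/repos}"
mkdir -p "$MIRROR_ROOT" 2>/dev/null || MIRROR_ROOT="$(mktemp -d)"

# Guard: only sync where dnf and a configured 'epel' repo actually exist,
# so the sketch is a harmless no-op elsewhere.
if command -v dnf >/dev/null 2>&1 && dnf repolist --enabled 2>/dev/null | grep -q '^epel'; then
  # --newest-only keeps the mirror small; drop it to retain full history.
  dnf reposync --repoid=epel --download-path="$MIRROR_ROOT" --newest-only
  createrepo_c "$MIRROR_ROOT/epel"
else
  echo "epel repo not configured on this host; nothing to sync" | tee "$MIRROR_ROOT/last-sync.log"
fi
```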

When it’s worth it:

  • Air-gapped security requirement
  • 100+ RHEL instances where proxy bandwidth is a concern
  • Regulated industry where every package has to be version-pinned and reviewed before deployment

When it’s overkill:

  • Small fleet (< 30 instances)
  • You already have a functional proxy solution
  • Your packages change frequently (every mirror sync is ops work)

Option 3 — Red Hat Satellite (enterprise-only)

Full Red Hat Satellite gives you content views, lifecycle environments, and detailed subscription management on top of a repository mirror. For small-to-medium fleets in China, this is overkill. Satellite licensing alone changes the economics, and the operational complexity is substantial.

Use Satellite if you’re already running it on-prem and want consistency in AWS China. Otherwise, start with Squid.

SSM Patch Manager With Alternative Repositories — Useful But Temporary

SSM Patch Manager can install patches from non-default repositories by enabling them during the patch operation. This is useful for:

  • Monthly patch cycles that need a specific EPEL version
  • One-off security updates from a non-default repo

The catch: SSM enables the repo, runs the patch, and disables the repo. The repo is only available during the patching window. For interactive dnf install by an admin, you still need the proxy or mirror path.

Treat this as a supplement to the Squid proxy, not a replacement.
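For reference, the alternative repository is declared as a “source” on the patch baseline. A hedged sketch with the AWS CLI — `create-patch-baseline --sources` is the real flag, but the baseline name, product string, and mirror URL here are placeholders — with the repo definition written to a file first so it can be reviewed:

```shell
# Sketch: declare an alternative patch source for SSM Patch Manager.
# Baseline name, product string, and repo URL are placeholders.
cat > /tmp/epel-source.json <<'EOF'
[
  {
    "Name": "epel",
    "Products": ["RedhatEnterpriseLinux9"],
    "Configuration": "[epel]\nname=EPEL 9\nbaseurl=https://mirror.example.internal/epel/9/\nenabled=1\ngpgcheck=1"
  }
]
EOF

# The create call needs AWS credentials and a China partition region,
# so it stays commented in this sketch:
# aws ssm create-patch-baseline \
#   --name "rhel9-with-epel" \
#   --operating-system REDHAT_ENTERPRISE_LINUX \
#   --sources file:///tmp/epel-source.json \
#   --region cn-north-1
```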

The RDS Escape Hatch

If your pain is specifically “I need PostgreSQL 17 on RHEL,” consider Amazon RDS for PostgreSQL, which is available in AWS China regions and includes modern PostgreSQL versions with zero repo management.

The trade-off:

| Factor | Self-managed PG on RHEL | RDS for PostgreSQL |
|---|---|---|
| Repo management | You | AWS |
| Patching | You | AWS |
| Extensions beyond RDS list | Yes (any) | Only the RDS-supported list |
| Cost | EC2 + EBS | Higher per-hour, but includes HA, backups, monitoring |
| Control | Full | Managed |

If you’re installing PostgreSQL purely to run PostgreSQL, RDS is usually the right answer in China — fewer URLs to whitelist, fewer ops hours, more reliable.

Practical Config Snippets

Squid whitelist-based ACL (starter)

# /etc/squid/squid.conf (simplified)
acl rhel_repos dstdomain .dl.fedoraproject.org .download.postgresql.org
http_access allow rhel_repos
http_access deny all
http_port 3128

RHEL yum/dnf config with proxy

# /etc/dnf/dnf.conf
[main]
proxy=http://squid.internal:3128
# Uncomment and set these only if the proxy requires authentication:
# proxy_username=
# proxy_password=

SSM document that uses a private repo for one run

A command sent through SSM (via the managed AWS-RunShellScript document, or a custom Command document) can be as simple as:

dnf --enablerepo=internal-mirror install -y postgresql17

Where internal-mirror is a pre-configured repo file pointing at your private mirror’s URL. This lets SSM Patch Manager use the internal mirror without permanently enabling it.
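A sketch of what that repo file might look like — the mirror hostname and GPG key path are assumptions, and enabled=0 keeps the repo inert until --enablerepo selects it:

```ini
# /etc/yum.repos.d/internal-mirror.repo — hypothetical internal mirror
[internal-mirror]
name=Internal package mirror
baseurl=https://mirror.example.internal/repos/$releasever/$basearch/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-internal
```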

Security Review Talking Points

When you propose the Squid solution to security:

  • One egress point. All external dependency traffic goes through one host with one log.
  • Whitelist, not blocklist. Security team approves destinations explicitly.
  • Audit trail. Every package URL is logged with timestamp and source instance.
  • No direct internet. RHEL instances themselves have no public route.
  • RHUI stays private. Base OS patches never leave AWS China.

This is usually a much easier conversation than asking for “internet access on every instance.”

Key Takeaways

  • RHUI in AWS China covers BaseOS + AppStream only. Everything else needs an external path.
  • RHUI is not on S3 — VPC Endpoints do not help. You need a real network path.
  • Default solution: Squid proxy with a whitelist. Simple, auditable, scales to small and medium fleets.
  • Private mirror (reposync / Pulp) is the right choice for large or air-gapped fleets. Satellite is rarely worth it.
  • SSM Patch Manager with alternative repos is a supplement, not a replacement — repos are only available during patching.
  • If your actual need is PostgreSQL, RDS bypasses the repo problem entirely.