Security in AWS is governed by a shared responsibility model in which both the vendor and the subscriber have distinct operational responsibilities. AWS assumes responsibility for the underlying infrastructure: hardware, the virtualization layer, facilities, and staff. The subscriber organization – that’s you – is responsible for, among other things, securing and controlling outbound VPC traffic destined for the Internet. Additionally, many organizations must meet regulatory requirements such as PCI DSS, which requires them to monitor and control outbound traffic to the Internet.
Securing egress traffic to the Internet can be tricky because most EC2 instances need outbound access for basic operations such as software patching and accessing AWS services. Additionally, applications often have legitimate needs to send or receive data to and from third-party services or SaaS applications.
Here’s an overview of several methodologies for securing VPC egress traffic, including their pros and cons.
This method, sometimes called the “trombone” or backhaul approach, routes all VPC egress traffic through your on-prem data center for inspection and filtering using your existing edge firewall infrastructure and rules. Because packets must travel on-premises before reaching the Internet, this approach adds latency and cost. It’s also at odds with “cloud-first” initiatives.
Because the majority of cloud workloads are server applications (i.e., non-user), the list of hosts, protocols, and ports being accessed by each application is typically known in advance. One simple and cost-effective model for securing egress traffic is to allow traffic only to this list of known, trusted sites, which can be vetted by a security team ahead of each release.
This approach effectively stops a hacker or malware from uploading your data to a nefarious site. It also allows you to keep an eye on where your applications are communicating.
It may seem daunting to do this for all of your applications in the cloud. However, with a VPC-centric approach, you can do this one application or one VPC at a time. Furthermore, discovery tools make the job of building the first trusted list easy.
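To make the trusted-list model concrete, here is a minimal sketch of checking an outbound connection against a vetted allowlist. The rule format and the entries are illustrative, not from any specific product; a real enforcement point would live in a gateway or proxy, not in application code.

```python
# Sketch of a default-deny egress allowlist check. The (host, protocol,
# port) rule format and the example entries are hypothetical.

TRUSTED_EGRESS = [
    # Destinations vetted by the security team ahead of each release
    ("api.stripe.com", "tcp", 443),
    ("sqs.us-east-1.amazonaws.com", "tcp", 443),
    ("pypi.org", "tcp", 443),
]

def egress_allowed(host: str, protocol: str, port: int) -> bool:
    """Allow outbound traffic only to known, trusted destinations."""
    return (host, protocol, port) in TRUSTED_EGRESS

# Anything not on the list is denied by default.
print(egress_allowed("api.stripe.com", "tcp", 443))    # trusted
print(egress_allowed("evil.example.net", "tcp", 443))  # not on the list
```

Because server workloads rarely contact new destinations, this list changes only when the application does, which keeps review overhead low.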
AWS provides a NAT gateway service and NAT instances to allow your private subnets to reach the Internet. These services offer some ability to filter traffic, but filtering is limited to IP-address rules in network ACLs or security groups. Because the number of rules per ACL or security group is capped, this becomes challenging to manage at scale. Furthermore, since the rules match IP addresses rather than domain names, you’ll need to query DNS and update the IP address list regularly.
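As a sketch of that maintenance burden, the snippet below (the function name and addresses are my own, for illustration) computes which IP rules must be added and revoked after re-resolving a trusted hostname – the kind of reconciliation you would have to run on a schedule.

```python
# Illustrative reconciliation of security-group IP rules against fresh
# DNS answers. In practice the "resolved" set would come from querying
# DNS, and the add/revoke sets would be applied with the EC2 API
# (authorize-security-group-egress / revoke-security-group-egress).

def reconcile_ip_rules(current_rules: set, resolved_ips: set):
    """Return (to_add, to_revoke) so the rule set matches the latest DNS answers."""
    to_add = resolved_ips - current_rules
    to_revoke = current_rules - resolved_ips
    return to_add, to_revoke

current = {"52.94.0.10/32", "52.94.0.11/32"}   # rules currently in the group
fresh = {"52.94.0.11/32", "52.94.0.99/32"}     # the service rotated one address
to_add, to_revoke = reconcile_ip_rules(current, fresh)
print(sorted(to_add))     # ['52.94.0.99/32']
print(sorted(to_revoke))  # ['52.94.0.10/32']
```

Services behind CDNs or load balancers rotate addresses frequently, so this loop never ends, which is exactly why IP-based filtering at the NAT layer is fragile.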
Instead of placing a firewall in each VPC, you could also send all VPC egress traffic to a single shared-security VPC or multiple regional shared-security VPCs. The primary benefit of this approach is that you’ll need fewer firewalls. However, this comes with the disadvantage of making it difficult to filter traffic differently by application or VPC.
A forward proxy server, such as Squid, can act as an intermediary for requests from internal users and servers, often caching content to speed up subsequent requests. AWS customers often use a VPN or AWS Direct Connect connection to leverage existing corporate proxy server infrastructure, or build a forward proxy farm on AWS using Squid proxy servers with internal Elastic Load Balancing.
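To show what proxy-based filtering looks like in practice, here is a sketch that generates a minimal Squid allowlist configuration from a list of trusted domains. The domain list and function name are hypothetical; `acl … dstdomain` and `http_access` are standard squid.conf directives.

```python
# Illustrative generator for a default-deny Squid allowlist. The domains
# are placeholders; a leading dot matches the domain and its subdomains.

def squid_allowlist_conf(domains):
    """Emit a squid.conf fragment that allows only the given destination domains."""
    lines = [f"acl trusted_sites dstdomain {d}" for d in domains]
    lines += [
        "http_access allow trusted_sites",
        "http_access deny all",  # default-deny everything else
    ]
    return "\n".join(lines)

print(squid_allowlist_conf([".amazonaws.com", ".pypi.org"]))
```

Note that every instance must also be configured to send its traffic through the proxy; the allowlist alone doesn’t force traffic through Squid.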
While the Squid solution works, it is hard to manage and a poor fit for cloud VPCs. For starters, protocol support is limited: Squid doesn’t support protocols other than HTTP/S, and keeping its configuration current requires ongoing effort. What’s more, new EC2 instances can launch without being configured to use the proxy, which presents a significant security risk.
So, after all is said and done, what should you do to filter and secure your VPC egress traffic?
More organizations are choosing a truly cloud-native approach: deploying lightweight, fully automated, centrally managed gateways in each VPC to control both inbound and outbound traffic.
Filtering VPC egress traffic is important for compliance with the Payment Card Industry Data Security Standard (PCI DSS), which governs how companies securely collect, store, process, and transmit credit card numbers. Two PCI DSS requirements in particular apply to AWS and other public cloud customers:
1.2.1: Restrict inbound and outbound traffic to only that which is necessary for the cardholder data environment, and specifically deny all other traffic.
1.3.4: Do not allow unauthorized outbound traffic from the cardholder data environment to the Internet.
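As an illustration of requirement 1.2.1 in security-group terms, the sketch below removes the default allow-all egress rule and authorizes only the traffic the cardholder data environment needs. The group ID and CIDR range are placeholders, not real values, and the actual API calls are shown only as comments.

```python
# Sketch of enforcing PCI DSS 1.2.1/1.3.4 on a cardholder-data security
# group. GroupId and CidrIp are hypothetical placeholders.

# Default security groups allow all outbound traffic; PCI requires
# "specifically deny all other traffic", so that rule must be revoked.
revoke_params = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [{
        "IpProtocol": "-1",  # the default allow-all egress rule
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
}

# Then allow only what the CDE actually needs, e.g. HTTPS to a vetted
# payment-processor range (placeholder CIDR).
authorize_params = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "vetted payment processor"}],
    }],
}

# With AWS credentials configured, these would be applied via boto3:
#   ec2 = boto3.client("ec2")
#   ec2.revoke_security_group_egress(**revoke_params)
#   ec2.authorize_security_group_egress(**authorize_params)
print(authorize_params["IpPermissions"][0]["FromPort"])  # 443
```

Security groups alone stop at IP addresses and ports, however, so most organizations still need a filtering layer that understands domain names to fully satisfy 1.3.4.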
Solutions for achieving and maintaining PCI compliance that worked in the data center don’t necessarily translate to AWS. The two most common mistakes are 1) using a traditional, non-cloud firewall in every VPC, and 2) using a centralized firewall.
Traditional firewalls, and even their virtual counterparts, are a mistake because they can’t be fully automated in an AWS environment. As a result, they slow performance, cause friction between your internal teams, and become cost-prohibitive as EC2 instance charges and firewall license charges grow.
Centralized firewalls are a mistake because they require traffic to be routed to one or several central VPCs. This increases your operations team’s workload and VPC connection costs, creates a natural chokepoint in the central VPC, caps firewall throughput at the size of a single instance, and adds unnecessary data-transfer costs for sending traffic from each VPC to the central VPC.