This is a recommendation on how to organize subnets inside an AWS VPC. Before continuing, it’s important to understand the difference between a public and private subnet.
There are three broad classes of subnets to run inside your VPC: public, private, and internal.
Internal subnets aren’t really a thing, but it’s a convenient way to talk about a route table configuration described below.
Here’s a Terraform module for a VPC configured as described here.
For a web app, public subnets are the primary entry point for client traffic via load balancers. Unless the app in question is doing its own load balancing rather than using AWS load balancers, it’s unlikely any actual EC2 instances will need to run in public subnets.
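In Terraform terms, a public subnet is just a subnet whose route table sends 0.0.0.0/0 to an internet gateway. A minimal sketch, assuming a VPC resource named aws_vpc.main (the name is illustrative, not taken from the linked module):

```hcl
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.0.0/24"
  map_public_ip_on_launch = true  # instances here get public IPs
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  # This default route to the internet gateway is what makes the
  # subnet "public" in AWS terms.
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
```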
No incoming traffic to the application? Then maybe public subnets are not required.
Actual application servers run in private subnets and talk to the broader internet via a NAT Gateway. Instances in private subnets can receive traffic forwarded from a load balancer or something similar, but the outside world can’t connect to them directly, and all of their outbound traffic goes through the NAT.
The NAT Gateways here are a huge advantage when interacting with private, third-party services where IP whitelisting comes into play. They get an Elastic IP, so any traffic from application servers will always come from the same source address regardless of how many of those servers are torn down or spun back up.
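The NAT Gateway setup described above might look like this in Terraform. A sketch assuming a VPC resource named aws_vpc.main and a public subnet aws_subnet.public already exist (both names are placeholders):

```hcl
# Elastic IP for the NAT Gateway: all outbound traffic from private
# subnets will appear to come from this one address.
resource "aws_eip" "nat" {
  domain = "vpc"  # "vpc = true" on older AWS provider versions
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id  # NAT Gateways live in a public subnet
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  # Outbound-only internet access via the NAT Gateway; nothing outside
  # can initiate a connection to subnets using this table.
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
```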
It may seem like IP whitelisting isn’t common, but it happens quite often in the advertising space. Even big things like the Google Hotel Ads API use whitelisting at the time of writing.
AWS managed services like ElastiCache or RDS should be placed in subnets that can’t talk to the outside internet at all. Servers running pre-built AMIs could also be placed here.
If an application doesn’t need to talk to the outside world at all, or only talks to it via a load balancer, private subnets could be skipped altogether in favor of internal subnets.
As mentioned above: there’s no such thing in AWS terms as an internal subnet. It’s a convenient way to describe a subnet whose route table only routes traffic within the VPC.
In practice this tends to work well for managed services, but EC2 instances may need to talk to the outside world for things like NTP.
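The internal route table is the simplest one to express in Terraform: attach a route table with no routes at all, leaving only the implicit local route. A sketch, again assuming a hypothetical aws_vpc.main:

```hcl
# No route blocks: AWS always includes an implicit "local" route for the
# VPC CIDR, so subnets using this table can only talk within the VPC.
resource "aws_route_table" "internal" {
  vpc_id = aws_vpc.main.id
}
```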
How Many Availability Zones (AZs)?
It’s generally a good idea to start small here. Run subnets of each required type in one or two availability zones. Keep in mind that data transfer within a single AZ is free, but transfer across AZs in a region costs money.
If the goal is to minimize costs for a small application, then start with one AZ. It’s much easier to add subnets in new availability zones than it is to remove them. Things like distributed databases can lead to some fun billing surprises when run across multiple availability zones.
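Starting small and growing later is easy if the subnet resources are parameterized over a list of zones. One way to sketch this in Terraform, assuming a hypothetical aws_vpc.main with a 10.0.0.0/16 CIDR:

```hcl
variable "azs" {
  # Start with one AZ; append more entries later to grow.
  default = ["us-east-1a"]
}

resource "aws_subnet" "private" {
  count             = length(var.azs)
  vpc_id            = aws_vpc.main.id
  availability_zone = var.azs[count.index]
  # Carve a distinct /24 out of the VPC's /16 for each AZ.
  cidr_block        = cidrsubnet("10.0.0.0/16", 8, count.index)
}
```

Adding `"us-east-1b"` to the list later creates the new subnet without touching the existing one.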
How Many Subnets?
It’s not really about how many subnets should be run, but how many hosts will be running in them. That said, if the goal is multiple availability zones, at least two subnets are needed, since each subnet lives in exactly one AZ.
If the number of hosts is in the thousands or tens of thousands, then multiple IPv4 subnets may be required. AWS has IPv6 available in VPCs, however, which may change the equation.
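For a rough sense of scale: AWS reserves five addresses in every subnet, so a /24 holds 251 usable IPv4 addresses and a /20 holds 4,091. A hypothetical larger subnet, assuming the same aws_vpc.main as above:

```hcl
# /20 = 4,096 addresses, minus the 5 AWS reserves per subnet = 4,091 usable.
resource "aws_subnet" "big" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.16.0/20"
}
```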