Managing infrastructure manually is time-consuming and error-prone. Automating your stack with Terraform not only saves hours of configuration but also ensures consistency across environments. When you need a managed relational database like PostgreSQL, Amazon RDS removes the operational burden while delivering scalability and reliability.
This guide walks you through provisioning a fully automated PostgreSQL database on AWS RDS using Terraform. You’ll configure security groups, subnet groups, and outputs while following infrastructure-as-code best practices. By the end, you’ll have a production-ready RDS instance ready to integrate with your applications.
Why Use Terraform for PostgreSQL on AWS RDS
Modern applications rely on reliable data storage, and PostgreSQL is a top choice for structured data with support for transactions, complex queries, and relationships. While AWS RDS handles provisioning and maintenance, Terraform automates the entire setup process.
Key benefits include:
- Consistency: Infrastructure defined as code reduces configuration drift across environments.
- Reusability: Modules allow you to reuse configurations across projects.
- Security: Centralized credential management with variables or AWS Secrets Manager.
- Scalability: Built-in support for multi-AZ deployments and failover configurations.
For development and testing, db.t3.micro offers a cost-effective starting point before scaling up in production.
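Before building the module, you need a root configuration with the AWS provider. A minimal sketch is shown below; the provider version constraint and region are illustrative assumptions, not requirements of this guide:

```hcl
# Root configuration (illustrative; adjust version and region to your setup)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # example region
}
```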
Setting Up the RDS Module Structure
Start by creating a dedicated module for your RDS instance. This keeps your Terraform code modular and easier to maintain.
```
modules/
  rds/
    main.tf
    variables.tf
    outputs.tf
```

In main.tf, define the aws_db_instance resource with essential configurations:
```hcl
resource "aws_db_instance" "postgres" {
  identifier          = "postgres-db"
  engine              = "postgres"
  engine_version      = "15"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  storage_type        = "gp2"
  skip_final_snapshot = true
  publicly_accessible = true

  username = var.database_username
  password = var.database_password
  db_name  = var.database_name

  vpc_security_group_ids = [aws_security_group.rds_sg.id]
  db_subnet_group_name   = aws_db_subnet_group.default.name
}
```

For production, set skip_final_snapshot = false to ensure you retain a final backup before deleting the instance.
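The resource above references several input variables. A minimal modules/rds/variables.tf that declares them might look like this (descriptions are illustrative; the variable names match those used in the resource):

```hcl
# modules/rds/variables.tf
variable "database_name" {
  description = "Name of the initial database to create"
  type        = string
}

variable "database_username" {
  description = "Master username for the instance"
  type        = string
}

variable "database_password" {
  description = "Master password for the instance"
  type        = string
  sensitive   = true
}

variable "vpc_id" {
  description = "VPC in which to create the security group"
  type        = string
}

variable "subnet_ids" {
  description = "Subnets (spanning at least two AZs) for the DB subnet group"
  type        = list(string)
}
```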
Securing Access with Security Groups and Subnet Groups
Security is critical when exposing a database endpoint. Begin by creating a security group that restricts access to PostgreSQL’s default port, 5432:
```hcl
resource "aws_security_group" "rds_sg" {
  name        = "rds-sg"
  description = "Allow PostgreSQL access from trusted sources"
  vpc_id      = var.vpc_id

  ingress {
    from_port = 5432
    to_port   = 5432
    protocol  = "tcp"
    # 0.0.0.0/0 is convenient for testing but open to the world;
    # restrict this to trusted CIDR ranges before production use.
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

AWS also requires a subnet group for high availability. This group must span at least two Availability Zones. If you already have private subnets defined in a network module, reference them directly:
```hcl
resource "aws_db_subnet_group" "default" {
  name       = "rds-subnet-group"
  subnet_ids = var.subnet_ids

  tags = {
    Name = "RDS Subnet Group"
  }
}
```

Attach both the security group and subnet group to your RDS instance to enforce secure access and availability.
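The module structure also includes an outputs.tf, which exposes connection details to the root configuration. A minimal sketch is shown below; the output names are illustrative choices, not fixed conventions:

```hcl
# modules/rds/outputs.tf -- example output names, adjust to taste
output "db_endpoint" {
  description = "Connection endpoint for the PostgreSQL instance"
  value       = aws_db_instance.postgres.endpoint
}

output "db_name" {
  description = "Name of the initial database"
  value       = aws_db_instance.postgres.db_name
}
```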
Integrating the Module into Your Terraform Configuration
With the module prepared, reference it in your root configuration. Open main.tf and add:
```hcl
module "rds" {
  source = "./modules/rds"

  database_name     = "my_app_db"
  database_username = "app_admin"
  database_password = var.db_password

  subnet_ids = module.network.private_subnet_ids
  vpc_id     = module.network.vpc_id
}
```

Always store sensitive values like passwords in Terraform variables or AWS Secrets Manager:
```hcl
variable "db_password" {
  description = "Password for the RDS PostgreSQL instance"
  type        = string
  sensitive   = true
}
```

After defining inputs, deploy the infrastructure with:
```shell
terraform init
terraform apply -auto-approve
```

A successful deployment outputs the database endpoint, which you can use to connect your application.
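One way to supply the sensitive variable without writing it to disk is an environment variable set before terraform apply; afterwards you can verify connectivity with psql. The db_endpoint output name below is an assumption; use whatever your module's outputs.tf actually defines:

```shell
# Set before running terraform apply; TF_VAR_-prefixed environment
# variables map onto Terraform input variables of the same name,
# keeping secrets out of .tf files and tfvars committed to git.
export TF_VAR_db_password='a-strong-password'  # placeholder value

# After apply completes, read the endpoint output and connect.
# "db_endpoint" is an assumed output name.
psql "host=$(terraform output -raw db_endpoint) port=5432 user=app_admin dbname=my_app_db"
```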
Next Steps: Scaling and Hardening Your Setup
While this configuration works well for development, production environments require additional considerations:
- Disable publicly_accessible and use VPC endpoints or bastion hosts for access.
- Enable skip_final_snapshot only for testing; disable it in production.
- Enable encryption at rest using AWS KMS.
- Monitor performance with CloudWatch metrics and alarms.
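Several of these hardening steps map directly onto aws_db_instance arguments. The sketch below shows one possible combination; the snapshot identifier, KMS variable, and retention period are illustrative values:

```hcl
resource "aws_db_instance" "postgres" {
  # ... settings from the earlier module, plus production hardening ...

  publicly_accessible       = false
  skip_final_snapshot       = false
  final_snapshot_identifier = "postgres-db-final" # example name

  multi_az                = true              # standby replica in a second AZ
  storage_encrypted       = true              # encryption at rest
  kms_key_id              = var.kms_key_arn   # assumed variable holding a KMS key ARN
  backup_retention_period = 7                 # days of automated backups
  deletion_protection     = true              # block accidental terraform destroy
}
```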
Use this foundation to expand your infrastructure with automated backups, read replicas, and CI/CD pipelines.
The future of data infrastructure lies in automation. By treating PostgreSQL provisioning as code, you unlock repeatability, security, and scalability—letting you focus on building applications, not managing servers.