This guide walks through a practical EC2 database host pattern: data lives on an attached EBS volume formatted with XFS, with a persistent mount and rollback-minded verification. It is a migration and operations checklist drawn from inherited environments where root volume growth, device naming assumptions, and boot ordering caused avoidable incidents. It covers volume attachment, XFS formatting, mount persistence, ownership, and runtime checks; data moves are handled in the follow-up playbook.
Why this pattern still appears
Separating system and data volumes keeps replacement and recovery cleaner. It also reduces pressure to mutate root disks during data growth events.
For current EBS operational guidance, AWS documentation is available at docs.aws.amazon.com.
Step 1: Provision EC2 instance and attach EBS volume
Pick an instance type according to the database profile and throughput needs. Attach the data volume in the same Availability Zone as the instance; EBS volumes can only be attached to instances in their own AZ.
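If you script this step, the AWS CLI calls look roughly like the sketch below. The volume and instance IDs, size, and Availability Zone are placeholders, and the guard keeps the snippet harmless on hosts without the CLI installed:

```shell
# Sketch of Step 1 with the AWS CLI. IDs, size, and AZ are placeholders;
# run these with your real identifiers and credentials.
AZ=us-east-1a
if command -v aws >/dev/null 2>&1; then
  aws ec2 create-volume --availability-zone "$AZ" --size 200 --volume-type gp3
  aws ec2 attach-volume --volume-id vol-0123example \
      --instance-id i-0123example --device /dev/sdf
else
  echo "aws CLI not found; run these calls from a host that has it"
fi
```

Note that the requested device name (/dev/sdf) is what the API records; on Nitro instances the kernel will still surface the volume as an NVMe device, which is exactly why Step 2 exists.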
Step 2: Identify device and confirm block mapping
```shell
lsblk -f
```
Do not assume fixed device names across all instance families.
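As a sketch of what to look for, the following filters `lsblk -f`-style output for devices with no filesystem signature. The sample text is illustrative, not live output; also note that a partitioned disk's parent row is blank too, so check child rows before trusting a name:

```shell
# Sketch: list devices whose FSTYPE column is empty, i.e. candidates for
# the new data volume. The sample is illustrative; on a real instance,
# pipe `lsblk -f` output in instead.
sample='NAME FSTYPE LABEL UUID MOUNTPOINT
nvme0n1p1 xfs root 1111aaaa /
nvme1n1'
echo "$sample" | awk 'NR > 1 && $2 == "" { print $1 }'
# prints: nvme1n1
```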
Step 3: Format volume with XFS
```shell
sudo mkfs.xfs /dev/nvme1n1
```
Run once, and only on the correct target device.
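A small guard can enforce that rule before you format anything. A sketch, assuming the /dev/nvme1n1 device path; the mkfs command is printed rather than executed so the check itself is safe to trial-run:

```shell
# Guard sketch: refuse to format a device that already carries a
# filesystem signature. /dev/nvme1n1 is an assumed device path.
DEV=/dev/nvme1n1
FSTYPE=$(lsblk -no FSTYPE "$DEV" 2>/dev/null | head -n 1)
if [ -n "$FSTYPE" ]; then
  echo "refusing: $DEV already has a $FSTYPE filesystem" >&2
else
  echo "no signature found; safe to run: sudo mkfs.xfs $DEV"
fi
```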
Step 4: Mount and assign ownership
```shell
sudo mkdir -p /data/db
sudo mount /dev/nvme1n1 /data/db
sudo chown -R <db-user>:<db-group> /data/db
```

Replace <db-user>:<db-group> with the account the database service runs as.
Step 5: Persist mount through reboot
Find the UUID and add an entry to /etc/fstab:
```shell
sudo blkid /dev/nvme1n1
```
Then set:
```
UUID=<your-uuid> /data/db xfs defaults,nofail 0 2
```
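A quick sanity check of the entry catches field-count typos before they cost you a reboot; a sketch run against the line itself, where the UUID is a stand-in for the blkid value:

```shell
# Sketch: confirm the new fstab line has the six expected fields and the
# nofail option. The UUID is a placeholder; use the one blkid reported.
entry='UUID=1111aaaa-2222-bbbb-3333-cccc4444dddd /data/db xfs defaults,nofail 0 2'
fields=$(echo "$entry" | awk '{ print NF }')
[ "$fields" -eq 6 ] && echo "six fields: ok"
case "$entry" in *nofail*) echo "nofail present: ok" ;; esac
```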
Test before relying on it:
```shell
sudo umount /data/db
sudo mount -a
```

If mount -a reports errors, fix the fstab entry before rebooting.
Operational caveats
- nofail avoids a boot hard-stop but can hide a mount failure if monitoring is weak.
- The database service should fail fast when the mount is missing, not silently write to the root volume.
- Snapshot policies should be tested with restore drills, not only scheduled.
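One way to make the service fail fast is a systemd drop-in that refuses to start the database until the mount is present. A sketch, assuming the database runs as postgresql.service; adjust the unit name to your service:

```
# /etc/systemd/system/postgresql.service.d/data-mount.conf
[Unit]
RequiresMountsFor=/data/db
```

With nofail in fstab the host still boots if the volume is missing, while RequiresMountsFor keeps the database from starting against an empty directory.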
Verification after reboot
- Confirm mount is present and type is XFS.
- Confirm database writes hit /data/db.
- Confirm free space and I/O metrics are as expected.
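The first two checks above can be scripted; a minimal sketch using findmnt, which matches only when /data/db is mounted with the given filesystem type:

```shell
# Post-reboot verification sketch: report whether /data/db is mounted
# as XFS. Wire the FAIL branch into monitoring rather than ignoring it.
if findmnt -n -t xfs /data/db >/dev/null 2>&1; then
  echo "OK: /data/db is an XFS mount"
  df -h /data/db
else
  echo "FAIL: /data/db is not an XFS mount" >&2
fi
```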
FAQ
Why XFS here instead of ext4?
XFS performs well for large files and growth scenarios, but choose based on workload and team familiarity.
Should logs be on the same volume?
Usually not; separate the concerns where possible, and keep critical logs accessible during data-volume incidents.
What breaks most often?
Wrong device mapping in fstab, service startup before mount, and missing post-reboot verification.