Journal note about fstab and EBS persistence validation.

EBS mount persistence and boot-order diagnostics

Operational notes on fstab configuration, UUID-based mounts, nofail options, and application service boot ordering for persistent EBS volumes on EC2.

This entry tracks EBS volume persistence diagnostics for EC2 instances where the volume holds application data or a database and the mount must survive reboots and stop/start cycles cleanly.

The fstab problem

The most common EBS persistence failure is a missing or incorrect /etc/fstab entry. Mounting an EBS volume manually with mount does not persist across reboots; the volume must have an entry in /etc/fstab.

There is a secondary problem: if fstab references a device name like /dev/xvdf and that device name changes after an instance stop/start cycle, the mount silently fails or, worse, blocks the boot sequence.

Using UUID instead of device name

Reference the volume by UUID rather than device path:

# Find the UUID of the volume
sudo blkid /dev/xvdf

# Add to fstab
UUID=your-uuid-here /data xfs defaults,nofail 0 2

The nofail option is critical: it prevents a missing or degraded EBS volume from blocking the entire instance boot. Without it, a detached volume can make the instance unresponsive on next start.
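A candidate fstab line can be sanity-checked in a script before it is installed: the mount point, filesystem type, and presence of nofail are the fields that matter for boot safety. A minimal sketch, using the placeholder entry from above (the UUID is not a real value):

```shell
# Sanity-check a candidate fstab entry: field 2 is the mount point,
# field 3 the filesystem type, field 4 the comma-separated options.
entry='UUID=your-uuid-here /data xfs defaults,nofail 0 2'

check=$(printf '%s\n' "$entry" | awk '
    $2 == "/data" && $3 == "xfs" && $4 ~ /(^|,)nofail(,|$)/ { print "ok" }
')
echo "entry check: ${check:-FAILED}"
```

The option-field regex matches nofail only as a whole option, so a typo like "nofails" would be flagged.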

Boot-order verification

After editing /etc/fstab, verify before rebooting:

# Test the fstab entry without rebooting
sudo mount -a

# Confirm the volume is mounted at the expected path
df -h /data

This catches syntax errors in fstab before they cause a boot failure. After confirming mount -a succeeds, test an actual reboot in a non-production instance before relying on the entry in production.
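In addition to mount -a, util-linux (version 2.29 and later) ships a dedicated fstab linter; a sketch of using it as a pre-reboot gate:

```shell
# findmnt --verify parses /etc/fstab without mounting anything and
# flags unknown filesystem types, missing mount points, and malformed
# option strings. Exit status 0 means no errors were found.
if findmnt --verify --fstab >/dev/null 2>&1; then
    fstab_status=clean
else
    fstab_status=problems
fi
echo "fstab status: $fstab_status"
```

Unlike mount -a, this does not attempt the mounts, so it is safe to run even when the volume is detached.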

Stop/start behavior (which may reassign device names) is different from reboot behavior — test both if your instances are regularly stopped.

Application boot timing

If the application service starts before the EBS mount is ready, it may create its data directory on the root volume rather than on the EBS volume, leading to a split data state that is unpleasant to diagnose.

In systemd, express the mount dependency explicitly in the service unit:

[Unit]
RequiresMountsFor=/data

This ensures the application does not start until the mount point is available.
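A fuller unit sketch for context, with hypothetical names (myapp and its ExecStart path are placeholders, not from the original notes):

```ini
[Unit]
Description=myapp (hypothetical application service)
# Do not start until /data is mounted; systemd derives both the
# ordering and the requirement from the mount unit for this path.
RequiresMountsFor=/data
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing the unit, run systemctl daemon-reload before restarting the service so systemd picks up the new dependency.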

Diagnosing mount timing failures

If boot-time mount behavior is unexpected:

  • Check /var/log/syslog (or /var/log/messages, depending on distribution) and journalctl -xe for mount timing and error entries
  • Look for application log entries showing writes to root volume paths rather than the expected mount point path
  • Run findmnt (or mount) after boot and confirm the EBS volume was mounted at the correct path before the application began writing
  • Check dmesg for XFS or device errors during the boot sequence
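The split-data check in the list above can be scripted. A hypothetical post-boot check, assuming the conventions in these notes: confirm that a directory is backed by its own mount rather than the root filesystem, using the mount-point column of df.

```shell
# Print the mount point backing a path. df -P guarantees one data
# line per filesystem; the sixth column is the mount point.
mount_of() {
    df -P "$1" | awk 'NR == 2 { print $6 }'
}

# "/" is always its own mount point, shown here as a demonstration;
# for the EBS case, compare "$(mount_of /data)" against /data before
# the application starts. A mismatch means writes are landing on the
# root volume (the split data state described above).
root_mount=$(mount_of /)
echo "mount backing /: $root_mount"
```

If the application runs as a systemd service, this check can be wired in as an ExecStartPre= guard so a bad mount fails fast instead of silently writing to the root volume.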