Fix EC2 NVMe Mount Mismatch After Reboot — The UUID Way

When working with multiple EBS volumes on an EC2 instance—especially NVMe-backed ones—you might notice that after a reboot, your mount points suddenly don't match the volumes you expected.

This guide explains the root cause, shows how to fix it using UUIDs, and shares some patterns and pro tips around volume device names.


The Problem

After rebooting, your mount points might look correct on the surface but are actually pointing to the wrong disks.

Before reboot:

/dev/nvme1n1 → 300G → /opt/puppetlabs
/dev/nvme2n1 → 50G  → /var/log/puppetlabs
/dev/nvme3n1 → 30G  → /etc/puppetlabs

After reboot:

/dev/nvme1n1 → 30G  → /opt/puppetlabs ❌
/dev/nvme2n1 → 50G  → /var/log/puppetlabs ✅
/dev/nvme3n1 → 300G → /etc/puppetlabs ❌

This can cause services to fail or logs to be written to the wrong disk.


Root Cause: NVMe Device Name Reordering

Device names like /dev/nvme1n1 are assigned dynamically by the Linux kernel at boot time. They depend on:

  • Volume size (smaller volumes may initialize faster)
  • Attachment order (EC2 does not guarantee that device names follow it)
  • I/O path and driver initialization timing

So what was /dev/nvme1n1 yesterday might be /dev/nvme3n1 after a reboot. Spooky, right?
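
You can see the identifiers that do survive reboots by listing the stable symlinks udev creates. On most distributions, EBS NVMe devices also show up under /dev/disk/by-id with the volume ID in the name (exact output varies by distro):

ls -l /dev/disk/by-uuid/
ls -l /dev/disk/by-id/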


Observation

Many sysadmins (like me!) have noticed this pattern:

/dev/nvme1n1   → 30G
/dev/nvme2n1   → 50G
/dev/nvme3n1   → 300G

Smaller volumes often get lower nvmeXn1 numbers. But don't rely on it — it's not guaranteed.


The Permanent Fix: Use UUIDs in /etc/fstab

Step 1: List Your Volumes and UUIDs

lsblk -f

Sample output (captured in the broken post-reboot state, so /opt/puppetlabs is sitting on the 30G disk):

NAME     FSTYPE  UUID          MOUNTPOINT
nvme1n1  xfs     0749b056...   /opt/puppetlabs       # 30G
nvme2n1  xfs     2a545650...   /var/log/puppetlabs   # 50G
nvme3n1  xfs     6d6bbea5...   /etc/puppetlabs       # 300G
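
Alternatively, blkid prints the same UUIDs:

sudo blkid /dev/nvme*n1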

Step 2: Identify What Each Volume Contains

If a volume is already mounted (even at the wrong path), you can inspect its contents at its current mount point. Otherwise, mount each device somewhere temporary and look inside:

mkdir /mnt/test1 /mnt/test2 /mnt/test3
mount /dev/nvme1n1 /mnt/test1
mount /dev/nvme2n1 /mnt/test2
mount /dev/nvme3n1 /mnt/test3

ls /mnt/test1
ls /mnt/test2
ls /mnt/test3
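
Note which data lives on which device, then unmount the temporary mounts:

umount /mnt/test1 /mnt/test2 /mnt/test3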

Step 3: Update /etc/fstab

Use UUIDs instead of /dev/nvmeXn1.

UUID=6d6bbea5-ff46-47f9-8ae2-b0281579e999   /opt/puppetlabs     xfs   defaults,nofail   0 2
UUID=2a545650-9df2-4811-b400-5f9bc25f5467   /var/log/puppetlabs xfs   defaults,nofail   0 2
UUID=0749b056-7c18-4fea-88ba-b7196a5dafcf   /etc/puppetlabs     xfs   defaults,nofail   0 2
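
If your util-linux is recent enough, you can sanity-check the edited fstab before touching any mounts:

sudo findmnt --verify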

Step 4: Apply Changes

Stop any services writing to these paths first (umount refuses to detach a busy filesystem), then unmount the misplaced volumes and remount everything from fstab. /var/log/puppetlabs was already on the correct disk, so it can stay mounted:

umount /opt/puppetlabs
umount /etc/puppetlabs
mount -a

Verify everything is correct:

df -hT

Then reboot:

sudo reboot
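
After the instance comes back up, confirm that each path sits on the disk of the expected size:

df -hT /opt/puppetlabs /var/log/puppetlabs /etc/puppetlabs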

Bonus: Identify EBS Volumes from Linux

Use the following command (from the nvme-cli package) to match NVMe devices with EBS volume IDs:

sudo nvme list

This helps map /dev/nvme1n1 to vol-xxxxxxxx in your AWS console.
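
For a single device, you can also read the controller serial number, which for EBS is the volume ID without the hyphen (device name below is just an example):

sudo nvme id-ctrl /dev/nvme1n1 | grep '^sn'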


Pro Tips

  • Always use UUIDs or LABELs for mounting volumes — never /dev/nvmeXn1
  • Back up your /etc/fstab before editing:
cp /etc/fstab /etc/fstab.backup
  • Use xfs_admin -L to label volumes with human-readable names (the filesystem must be unmounted while you relabel it); see the LABEL example after this list:
sudo xfs_admin -L optdata /dev/nvmeXn1
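
Once a volume is labeled, fstab can reference the label instead of the UUID (using the hypothetical label from above):

LABEL=optdata   /opt/puppetlabs   xfs   defaults,nofail   0 2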

Conclusion

Dynamic device naming is one of those silent little issues in cloud infrastructure that can cause major headaches. But by switching to UUID-based mounts, you ensure your volumes are mounted correctly every time — no matter how the system orders them.

Keep calm and UUID on 💚
