Description of problem:
When using NVMe disks in AWS, the kubelet is not able to retrieve the partition type:
- The first time the volume is attached to a node, it is partitioned, formatted, and mounted.
- Subsequent attempts to attach and mount the volume fail because the partition cannot be detected. The mount ends up failing with:

=====
failed to mount the volume as "xfs", it already contains unknown data, probably partitions. Mount error: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-west-1b/vol-00888f74479ee1602 --scope -- mount -t xfs -o defaults /dev/nvme2n1 /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-west-1b/vol-00888f74479ee1602
Output: Running scope as unit run-17520.scope.
mount: wrong fs type, bad option, bad superblock on /dev/nvme2n1
=====

Before this happens, the kubelet fails to retrieve the partition table type (dos) - note the %!s(MISSING) in the last log line:

=====
mount_linux.go:549] Attempting to determine if disk "/dev/nvme2n1" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/nvme2n1])
mount_linux.go:552] Output: "DEVNAME=/dev/nvme2n1\nPTTYPE=dos\n", err: <nil>
mount_linux.go:590] Disk dos detected partition table type: %!s(MISSING)
=====

Running blkid manually shows the correct output:

# blkid -p -s TYPE -s PTTYPE -o export /dev/nvme2n1
DEVNAME=/dev/nvme2n1
PTTYPE=dos

The correct device to mount would be the partition nvme2n1p1, not the whole disk:

nvme2n1     259:5    0   75G  0 disk
`-nvme2n1p1 259:6    0   75G  0 part

I'm attaching full logs, pvc, pv, etc.

Version-Release number of selected component (if applicable):
3.11.98

How reproducible:
N/A - I don't have an environment to reproduce this