Description of problem:
Upstream bug: https://bugs.launchpad.net/ubuntu/+source/cloud-utils/+bug/1259703

Version-Release number of selected component (if applicable):
cloud-utils-growpart-0.27-13.el7.noarch

How reproducible:
Confirmed in 2 cases, but probably all of the time.

Steps to Reproduce:
1. Refer to the upstream bug report.
2. In OSP 7, overcloud nodes with disks > 2 TiB end up with a random root partition size.

Actual results:

Expected results:

Additional info:
Is it possible that we run into [1], which wasn't fixed in [2], and the root cause is [3] and [4]?

[1] https://bugs.launchpad.net/ubuntu/+source/cloud-utils/+bug/1259703

[2] I guess there is a high risk that our rpm is older than Canonical's patch in [1] (I didn't bother to check the source code, though):

[root@overcloud-controller-0 ~]# rpm -ql --changelog cloud-utils-growpart-0.27-13.el7.noarch
* Tue Mar 18 2014 Lars Kellogg-Stedman <lars> - 0.27-13
- suppress partx usage error
* Tue Jan 14 2014 Lars Kellogg-Stedman <lars> - 0.27-11
- import into RHEL

[3] We still use MBR on the overcloud:

Model: Virtio Block Device (virtblk)
Disk /dev/vda: 64.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2097kB  1049kB  primary
 2      2097kB  64.4GB  64.4GB  primary  xfs          boot

[4] https://en.wikipedia.org/wiki/Master_boot_record
(...) MBR partition entries and the MBR boot code used in commercial operating systems, however, are limited to 32 bits. Therefore, the maximum disk size supported on disks using 512-byte sectors (whether real or emulated) by the MBR partitioning scheme (without using non-standard methods) is limited to 2 TiB (...)
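For reference, the 2 TiB ceiling quoted in [4] falls directly out of the 32-bit sector fields in the MBR partition entry. A minimal sketch of the arithmetic, assuming 512-byte sectors (the values here are just the standard limits, not taken from any particular machine):

```shell
# An MBR partition entry stores the start LBA and the sector count as
# 32-bit values, so with 512-byte sectors the addressable limit is
# 2^32 * 512 bytes.
max_sectors=$(( 2**32 ))
sector_bytes=512
max_bytes=$(( max_sectors * sector_bytes ))
echo "$max_bytes bytes"                    # 2199023255552
echo "$(( max_bytes / 1024**4 )) TiB"      # 2
```

Anything past that boundary is simply unaddressable through a standard MBR entry, which is consistent with the random-looking root partition sizes seen on >2 TiB disks.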
Hi, I'm wondering whether we also need the fix for Bug 1290272 to get partitions over 2 TB with UEFI. Some important customers strongly want partitions over 2 TB with EFI on Ironic deployments as well. Could we move this forward sooner so it is addressed on both the growpart side and the Ironic side?
Hi Lars,

After checking upstream for a while: to meet the customer's request for partitions over 2 TB with growpart, we need the following fixes from newer revisions as well. With revision 250 alone we only prevent the crash; we still cannot grow a partition on GPT (or convert MBR to GPT where present). I agree this may require another rebase, and since this is an important part of deployment we have to handle it with extreme care in the OSP 8 repo, but it is also a fairly simple shell script, so we may be able to produce a clean fix. Can you please take a look and triage this?

- use sfdisk 2.26 if available for gpt resizing
  http://bazaar.launchpad.net/~cloud-utils-dev/cloud-utils/trunk/revision/266.1.14
- growpart: when growing, don't grow past the secondary gpt table
  http://bazaar.launchpad.net/~cloud-utils-dev/cloud-utils/trunk/revision/269#bin/growpart

Thank you,
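To illustrate why the second fix matters: on GPT, "grow to the end of the disk" must stop short of the backup GPT structures stored at the disk's tail, or the resize corrupts them. A minimal sketch of that bound (the disk size is hypothetical, and the 33 reserved tail sectors assume the common 128-entry, 128-byte-per-entry GPT layout):

```shell
# A GPT disk keeps a backup header in the last sector and a backup
# partition-entry array in the 32 sectors before it (128 entries x
# 128 bytes = 16 KiB), so the last usable LBA is total_sectors - 34.
disk_sectors=$(( 4374 * 1000**3 / 512 ))   # hypothetical ~4374 GB disk
backup_tail_sectors=33                     # 32 entry sectors + 1 backup header
last_usable_lba=$(( disk_sectors - backup_tail_sectors - 1 ))
echo "$last_usable_lba"
```

This is the bound that revision 269 makes growpart respect; a growpart that extends the partition to the literal last sector overwrites the backup table.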
Upstream has finally produced some new releases, so I am working on a 0.7.8 package this week. I will update this bz as soon as that is available.
So, the previous comment is misleading because I was thinking "cloud-init" but of course this is "growpart". I will add a new cloud-utils-growpart release to my todo list...
*** Bug 1372206 has been marked as a duplicate of this bug. ***
Is it possible to list the instructions here, in case someone currently on the latest Red Hat-supported OSP 10 version wants to test the bug fix (which is currently in QA)? Thank you.
Raising the Customer Escalation Flag.
On a KVM-virtualization-based director environment, I was able to test successfully with an msdos partition table:

[heat-admin@overcloud-controller-0 ~]$ sudo parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 4374GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2097kB  1049kB  primary
 2      2097kB  2199GB  2199GB  primary  xfs          boot

[heat-admin@overcloud-controller-0 ~]$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs       2.0T  3.4G  2.0T   1% /
devtmpfs       devtmpfs  5.7G     0  5.7G   0% /dev
tmpfs          tmpfs     5.7G     0  5.7G   0% /dev/shm
tmpfs          tmpfs     5.7G  704K  5.7G   1% /run
tmpfs          tmpfs     5.7G     0  5.7G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.2G     0  1.2G   0% /run/user/1000

---------------------------

I also tested the gpt disk label with the following commands, and it worked as expected:

ironic node-update <node-uuid> add properties/capabilities='disk_label:gpt'
nova flavor-key baremetal set capabilities:disk_label="gpt"

[heat-admin@overcloud-controller-0 ~]$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs       4.0T  3.5G  4.0T   1% /
devtmpfs       devtmpfs  5.7G     0  5.7G   0% /dev
tmpfs          tmpfs     5.7G     0  5.7G   0% /dev/shm
tmpfs          tmpfs     5.7G  444K  5.7G   1% /run
tmpfs          tmpfs     5.7G     0  5.7G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.2G     0  1.2G   0% /run/user/1000

[heat-admin@overcloud-controller-0 ~]$ sudo parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 4374GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2097kB  1049kB               primary  bios_grub
 2      2097kB  3146kB  1049kB               primary
 3      3146kB  4374GB  4374GB  xfs          primary

All results look as expected. Was there anything else that needed testing?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0871