Bug 1321373
| Field | Value |
|---|---|
| Summary | growpart on disk larger than 2TB fails |
| Product | Red Hat OpenStack |
| Component | cloud-utils-growpart |
| Version | 7.0 (Kilo) |
| Target Release | 9.0 (Mitaka) |
| Target Milestone | async |
| Hardware | Unspecified |
| OS | Unspecified |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | urgent |
| Keywords | ZStream |
| Reporter | Andreas Karis <akaris> |
| Assignee | Lars Kellogg-Stedman <lars> |
| QA Contact | Leonid Natapov <lnatapov> |
| CC | achernet, apevec, cshastri, dpathak, dsafford, jherrman, jraju, lars, lhh, lnatapov, mburns, mfuruta, rkharwar, srevivo, vcojot |
| Fixed In Version | cloud-utils-growpart-0.29-1.el7 |
| Doc Type | Bug Fix |
| Doc Text | Prior to this update, using the growpart command on a disk larger than 2 TB that had an MBR partition table caused the partition table not to be updated correctly. As a consequence, the partition was in some cases shrunk, which corrupted the file system on the disk. With this update, growpart handles disks larger than 2 TB properly, and the described problem no longer occurs. |
| Duplicates | 1372206 (view as bug list) |
| Bug Blocks | 1372206 |
| Type | Bug |
| Last Closed | 2017-04-04 15:25:54 UTC |
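The 2 TB boundary described in the doc text comes from the MBR on-disk format: a partition's start sector and sector count are stored as unsigned 32-bit values, so with 512-byte sectors nothing beyond 2 TiB can be addressed, and arithmetic past that point wraps. A minimal shell sketch of the limit:

```shell
# MBR records a partition's start and size as unsigned 32-bit
# sector counts, so the largest addressable count is 2^32 sectors.
max_sectors=$(( 1 << 32 ))      # 4294967296 sectors
sector_size=512                 # bytes per logical sector
max_bytes=$(( max_sectors * sector_size ))
echo "$max_bytes"               # 2199023255552 bytes, i.e. 2 TiB
```

Any partition end that would exceed this count cannot be represented in an MBR entry, which is why the pre-fix growpart could write a truncated (shrunk) size on larger disks.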
Description
Andreas Karis
2016-03-25 20:37:44 UTC
Hi, I'm wondering whether we also need the fix from Bug 1290272 to get partitions over 2 TB with UEFI. Some important customers strongly want partitions over 2 TB with EFI on ironic deployments as well. Could we proceed with this sooner, so it is addressed on both the growpart side and the ironic side?

Hi Lars, after checking upstream for a while, I believe we need the following fixes from newer revisions as well to meet the customer's request for partitions over 2 TB with growpart. With revision 250 alone we only prevent the crash; we still cannot create a GPT partition (or convert MBR to GPT where needed). I agree this may require another rebase, and since this is an important part of deployment we have to handle it with extreme care in the OSP 8 repo, but growpart is a fairly simple shell script, so a clean fix should be possible. Can you please take a look and triage this?

- Use sfdisk 2.26, if available, for GPT resizing: http://bazaar.launchpad.net/~cloud-utils-dev/cloud-utils/trunk/revision/266.1.14
- growpart: when growing, don't grow past the secondary GPT table: http://bazaar.launchpad.net/~cloud-utils-dev/cloud-utils/trunk/revision/269#bin/growpart

Thank you.

Upstream has finally produced some new releases, so I am working on a 0.7.8 package this week. I will update this bz as soon as that is available.

So, the previous comment is misleading because I was thinking "cloud-init", but of course this is "growpart". I will add a new cloud-utils-growpart release to my todo list...

*** Bug 1372206 has been marked as a duplicate of this bug. ***

Is it possible to list the instructions here if someone currently on the latest RH-supported OSP 10 version wants to test the bug fix (which is currently on QA)? Thank you.

Raising the Customer Escalation Flag.
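The second upstream fix listed above ("don't grow past the secondary GPT table") exists because GPT keeps a backup copy of the header and partition-entry array in the last sectors of the disk, so a resized partition must stop short of them. A hedged sketch of that reservation, assuming the common layout of 512-byte sectors and 128 entries of 128 bytes each, on a hypothetical 4 TiB disk:

```shell
disk_bytes=$(( 4 * (1 << 40) ))               # hypothetical 4 TiB disk
sector_size=512
total_sectors=$(( disk_bytes / sector_size ))
# Secondary GPT at the end of the disk: 128 entries * 128 bytes
# = 32 sectors of entries, plus 1 backup-header sector.
backup_sectors=$(( 128 * 128 / sector_size + 1 ))   # 33 sectors
last_usable_lba=$(( total_sectors - backup_sectors - 1 ))
echo "$last_usable_lba"                       # 8589934558
```

Growing a partition to the raw end of the disk would overwrite this backup area, which is exactly what revision 269 guards against.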
On a KVM-virtualization based director environment, I was able to test successfully with an msdos partition:

```
[heat-admin@overcloud-controller-0 ~]$ sudo parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 4374GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2097kB  1049kB  primary
 2      2097kB  2199GB  2199GB  primary  xfs          boot

[heat-admin@overcloud-controller-0 ~]$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs       2.0T  3.4G  2.0T   1% /
devtmpfs       devtmpfs  5.7G     0  5.7G   0% /dev
tmpfs          tmpfs     5.7G     0  5.7G   0% /dev/shm
tmpfs          tmpfs     5.7G  704K  5.7G   1% /run
tmpfs          tmpfs     5.7G     0  5.7G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.2G     0  1.2G   0% /run/user/1000
```

I also tested with a gpt disk label, using the following commands, and it worked as expected:

```
ironic node-update <node-uuid> add properties/capabilities='disk_label:gpt'
nova flavor-key baremetal set capabilities:disk_label="gpt"
```

```
[heat-admin@overcloud-controller-0 ~]$ df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda3      xfs       4.0T  3.5G  4.0T   1% /
devtmpfs       devtmpfs  5.7G     0  5.7G   0% /dev
tmpfs          tmpfs     5.7G     0  5.7G   0% /dev/shm
tmpfs          tmpfs     5.7G  444K  5.7G   1% /run
tmpfs          tmpfs     5.7G     0  5.7G   0% /sys/fs/cgroup
tmpfs          tmpfs     1.2G     0  1.2G   0% /run/user/1000

[heat-admin@overcloud-controller-0 ~]$ sudo parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 4374GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2097kB  1049kB               primary  bios_grub
 2      2097kB  3146kB  1049kB               primary
 3      3146kB  4374GB  4374GB  xfs          primary
```

All results look as expected. Was there anything else that needed testing?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below.
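For anyone wanting to reproduce the original failure without spare multi-terabyte hardware, a sparse file is enough to stand in for a large disk. This is only a sketch under that assumption; the partitioning and growpart steps that would follow are environment-specific, so only the image creation is shown:

```shell
# Create a sparse 3 TiB image: it occupies almost no real disk space,
# but tools see a 3298534883328-byte block of storage.
img=$(mktemp /tmp/growpart-test.XXXXXX)
truncate -s 3T "$img"
stat -c %s "$img"    # 3298534883328
# Next steps (not run here): put an msdos label and a partition on the
# image, attach it to a loop device, and run growpart on that device.
rm -f "$img"
```

With the pre-fix growpart, growing an MBR partition on such a device past the 2 TiB boundary is where the shrink/corruption described in the doc text occurred.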
If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0871