Bug 1654555
| Summary: | [RFE] Extending HostedEngine disk during deploy should not extend root filesystem | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Marcus West <mwest> |
| Component: | ovirt-hosted-engine-setup | Assignee: | Simone Tiraboschi <stirabos> |
| Status: | CLOSED ERRATA | QA Contact: | Nikolai Sednev <nsednev> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | 4.1.11 | CC: | aefrat, dfediuck, ebenahar, jortialc, lsurette, mavital, mkalinin, mtessun, mwest, rdlugyhe, stirabos, yturgema |
| Target Milestone: | ovirt-4.4.0 | Keywords: | FutureFeature, Triaged |
| Target Release: | --- | Flags: | lsvaty: testing_plan_complete- |
| Hardware: | All | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | rhv-4.4.0-29 | Doc Type: | Bug Fix |
| Doc Text: | Previously, the `/` filesystem automatically grew to fit the whole disk, and the user could not increase the size of `/var` or `/var/log`. This happened because, if a customer specified a disk larger than 49 GB while installing the Hosted Engine, the whole logical volume was allocated to the root (`/`) filesystem. In contrast, for the RHVM machine, the critical filesystems are `/var` and `/var/log`. The current release fixes this issue: the RHV Manager appliance is now based on the logical volume manager (LVM). At setup time, its PV and VG are automatically extended, but the logical volumes (LVs) are not. As a result, after installation is complete, you can extend all of the LVs in the Manager VM using the free space in the VG. | Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2020-08-04 13:26:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | Integration | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 1579000 | ||
Re-targeting to 4.3.1 since this BZ has not been proposed as a blocker for 4.3.0. If you think this bug should block 4.3.0, please re-target and set the blocker flag.

Moving to 4.3.2, not being identified as a blocker for 4.3.1.

*** Bug 1751665 has been marked as a duplicate of this bug. ***

Fixed as a side effect of https://bugzilla.redhat.com/1579000

nsednev-he-1 ~]# fdisk -l
Disk /dev/vda: 120 GiB, 128849018880 bytes, 251658240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x757d28c9
Device Boot Start End Sectors Size Id Type
/dev/vda1 * 2048 2099199 2097152 1G 83 Linux
/dev/vda2 2099200 121634815 119535616 57G 8e Linux LVM
nsednev-he-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 120G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 57G 0 part
├─ovirt-root 253:0 0 15G 0 lvm /
├─ovirt-swap 253:1 0 8G 0 lvm [SWAP]
├─ovirt-audit 253:2 0 1G 0 lvm /var/log/audit
├─ovirt-log 253:3 0 10G 0 lvm /var/log
├─ovirt-var 253:4 0 20G 0 lvm /var
├─ovirt-tmp 253:5 0 2G 0 lvm /tmp
└─ovirt-home 253:6 0 1G 0 lvm /home
nsednev-he-1 ~]# lsblk --fs
NAME FSTYPE LABEL UUID MOUNTPOINT
sr0
vda
├─vda1 xfs 9f22855e-6236-4d0a-b6ca-163c0cc3a8d9 /boot
└─vda2 LVM2_member Jg7ywj-7v82-rycg-PH2b-bP8w-sbQg-pqDhET
├─ovirt-root xfs 58f084ce-66e1-4535-a4ac-163bc9d7db9a /
├─ovirt-swap swap db54e26e-33f6-4e48-b746-b5ebf6eb7c35 [SWAP]
├─ovirt-audit xfs 42d29c8a-bbd3-4ed3-a081-d2deeceec275 /var/log/audit
├─ovirt-log xfs 7877ad06-c6a9-4aeb-b5f1-68fdab99b1d0 /var/log
├─ovirt-var xfs 506b6ef3-4219-4df0-9f8d-7c9eac8ae6fb /var
├─ovirt-tmp xfs e1693e0d-2c81-485e-a865-8bb59f167d1c /tmp
└─ovirt-home xfs e621852a-65ab-40ed-88d4-8ae7d05da989 /home
nsednev-he-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 12K 7.8G 1% /dev/shm
tmpfs 7.8G 8.8M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/ovirt-root 15G 3.3G 12G 22% /
/dev/mapper/ovirt-var 20G 503M 20G 3% /var
/dev/mapper/ovirt-home 1014M 40M 975M 4% /home
/dev/mapper/ovirt-tmp 2.0G 47M 2.0G 3% /tmp
/dev/mapper/ovirt-log 10G 114M 9.9G 2% /var/log
/dev/mapper/ovirt-audit 1014M 40M 975M 4% /var/log/audit
/dev/vda1 1014M 166M 849M 17% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
nsednev-he-1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
audit ovirt -wi-ao---- 1.00g
home ovirt -wi-ao---- 1.00g
log ovirt -wi-ao---- 10.00g
root ovirt -wi-ao---- <15.00g
swap ovirt -wi-ao---- 8.00g
tmp ovirt -wi-ao---- 2.00g
var ovirt -wi-ao---- 20.00g
nsednev-he-1 ~]# pvscan
PV /dev/vda2 VG ovirt lvm2 [<57.00 GiB / 0 free]
Total: 1 [<57.00 GiB] / in use: 1 [<57.00 GiB] / in no VG: 0 [0 ]
Works for me just fine: "/" is not taking all the space during deployment. As you can see, I created a disk of 120 GB and "/" took only 15 GB; "/var/log" and "/var" can be manually extended if required.
Tested on:
ovirt-engine-setup-4.4.0-0.33.master.el8ev.noarch
ovirt-hosted-engine-ha-2.4.2-1.el8ev.noarch
ovirt-hosted-engine-setup-2.4.4-1.el8ev.noarch
Linux 4.18.0-193.el8.x86_64 #1 SMP Fri Mar 27 14:35:58 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 8.2 (Ootpa)
nsednev-he-1 ~]# fdisk /dev/vda
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (2 primary, 0 extended, 2 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (3,4, default 3):
First sector (121634816-251658239, default 121634816):
Last sector, +sectors or +size{K,M,G,T,P} (121634816-251658239, default 251658239):
Created a new partition 3 of type 'Linux' and of size 62 GiB.
Command (m for help): t
Partition number (1-3, default 3): 3
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'.
Command (m for help): w
The partition table has been altered.
Syncing disks.
alma04 ~]# hosted-engine --vm-shutdown
alma04 ~]# hosted-engine --vm-start
alma04 ~]# virsh -r list --all
Id Name State
------------------------------
3 HostedEngine running
nsednev-he-1 ~]# fdisk -l
Disk /dev/vda: 120 GiB, 128849018880 bytes, 251658240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x757d28c9
Device Boot Start End Sectors Size Id Type
/dev/vda1 * 2048 2099199 2097152 1G 83 Linux
/dev/vda2 2099200 121634815 119535616 57G 8e Linux LVM
/dev/vda3 121634816 251658239 130023424 62G 8e Linux LVM
nsednev-he-1 ~]# pvcreate /dev/vda3
Physical volume "/dev/vda3" successfully created.
nsednev-he-1 ~]# vgdisplay
--- Volume group ---
VG Name ovirt
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 7
Open LV 7
Max PV 0
Cur PV 1
Act PV 1
VG Size <57.00 GiB
PE Size 4.00 MiB
Total PE 14591
Alloc PE / Size 14591 / <57.00 GiB
Free PE / Size 0 / 0
VG UUID WNb43P-OUJT-mz7p-22ZD-yHrU-YDY5-e06wTR
nsednev-he-1 ~]# vgextend ovirt /dev/vda3
Volume group "ovirt" successfully extended
nsednev-he-1 ~]# vgdisplay
--- Volume group ---
VG Name ovirt
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 7
Open LV 7
Max PV 0
Cur PV 2
Act PV 2
VG Size 118.99 GiB
PE Size 4.00 MiB
Total PE 30462
Alloc PE / Size 14591 / <57.00 GiB
Free PE / Size 15871 / <62.00 GiB
VG UUID WNb43P-OUJT-mz7p-22ZD-yHrU-YDY5-e06wTR
nsednev-he-1 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/ovirt/swap
LV Name swap
VG Name ovirt
LV UUID u5LWRz-Z5fT-zlhY-9KGc-uJK8-74Vd-5xDkj2
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:25 +0300
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
--- Logical volume ---
LV Path /dev/ovirt/audit
LV Name audit
VG Name ovirt
LV UUID 4g7mIV-x4i4-ZigB-CvtS-uZJB-svKs-TLSTw1
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:25 +0300
LV Status available
# open 1
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2
--- Logical volume ---
LV Path /dev/ovirt/log
LV Name log
VG Name ovirt
LV UUID ZBidxJ-yuTx-ozn5-leyj-t4Pp-etbL-J0tDB0
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:26 +0300
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:3
--- Logical volume ---
LV Path /dev/ovirt/var
LV Name var
VG Name ovirt
LV UUID iWD1YG-ipfZ-hgnB-hSpF-j2G5-kFbn-yw9WPS
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:26 +0300
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:4
--- Logical volume ---
LV Path /dev/ovirt/tmp
LV Name tmp
VG Name ovirt
LV UUID TeJhQQ-z9FO-Ud8N-oz1H-Q2fB-x3E9-sNx9cv
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:26 +0300
LV Status available
# open 1
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:5
--- Logical volume ---
LV Path /dev/ovirt/home
LV Name home
VG Name ovirt
LV UUID tW7Jee-tWRr-ztvT-yZPN-SOTn-4N7C-68u0CX
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:27 +0300
LV Status available
# open 1
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:6
--- Logical volume ---
LV Path /dev/ovirt/root
LV Name root
VG Name ovirt
LV UUID vSJtDh-SOlM-J2eY-20eQ-t3A3-bpGf-9qJbCp
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:27 +0300
LV Status available
# open 1
LV Size <15.00 GiB
Current LE 3839
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
nsednev-he-1 ~]# pvscan
PV /dev/vda2 VG ovirt lvm2 [<57.00 GiB / 0 free]
PV /dev/vda3 VG ovirt lvm2 [<62.00 GiB / <62.00 GiB free]
Total: 2 [118.99 GiB] / in use: 2 [118.99 GiB] / in no VG: 0 [0 ]
nsednev-he-1 ~]# lvextend -L+31G /dev/ovirt/var
Size of logical volume ovirt/var changed from 20.00 GiB (5120 extents) to 51.00 GiB (13056 extents).
Logical volume ovirt/var successfully resized.
nsednev-he-1 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/ovirt/var
LV Name var
VG Name ovirt
LV UUID iWD1YG-ipfZ-hgnB-hSpF-j2G5-kFbn-yw9WPS
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:26 +0300
LV Status available
# open 1
LV Size 51.00 GiB
Current LE 13056
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:4
[root@nsednev-he-1 ~]# lvextend -L+30G /dev/ovirt/tmp
Size of logical volume ovirt/tmp changed from 2.00 GiB (512 extents) to 32.00 GiB (8192 extents).
Logical volume ovirt/tmp successfully resized.
nsednev-he-1 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/ovirt/tmp
LV Name tmp
VG Name ovirt
LV UUID TeJhQQ-z9FO-Ud8N-oz1H-Q2fB-x3E9-sNx9cv
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-04-17 17:58:26 +0300
LV Status available
# open 1
LV Size 32.00 GiB
Current LE 8192
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:5
nsednev-he-1 ~]# vgdisplay
--- Volume group ---
VG Name ovirt
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 7
Open LV 7
Max PV 0
Cur PV 2
Act PV 2
VG Size 118.99 GiB
PE Size 4.00 MiB
Total PE 30462
Alloc PE / Size 30207 / <118.00 GiB
Free PE / Size 255 / 1020.00 MiB
VG UUID WNb43P-OUJT-mz7p-22ZD-yHrU-YDY5-e06wTR
nsednev-he-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 12K 7.8G 1% /dev/shm
tmpfs 7.8G 8.8M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/ovirt-root 15G 3.3G 12G 22% /
/dev/mapper/ovirt-home 1014M 40M 975M 4% /home
/dev/mapper/ovirt-tmp 2.0G 47M 2.0G 3% /tmp
/dev/mapper/ovirt-var 20G 503M 20G 3% /var
/dev/mapper/ovirt-log 10G 114M 9.9G 2% /var/log
/dev/mapper/ovirt-audit 1014M 40M 975M 4% /var/log/audit
/dev/vda1 1014M 166M 849M 17% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
nsednev-he-1 ~]# xfs_growfs /dev/ovirt/var
meta-data=/dev/mapper/ovirt-var isize=512 agcount=4, agsize=1310720 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=5242880, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 5242880 to 13369344
nsednev-he-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 12K 7.8G 1% /dev/shm
tmpfs 7.8G 8.8M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/ovirt-root 15G 3.3G 12G 22% /
/dev/mapper/ovirt-home 1014M 40M 975M 4% /home
/dev/mapper/ovirt-tmp 2.0G 47M 2.0G 3% /tmp
/dev/mapper/ovirt-var 51G 725M 51G 2% /var
/dev/mapper/ovirt-log 10G 114M 9.9G 2% /var/log
/dev/mapper/ovirt-audit 1014M 40M 975M 4% /var/log/audit
/dev/vda1 1014M 166M 849M 17% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
nsednev-he-1 ~]# xfs_growfs /dev/ovirt/tmp
meta-data=/dev/mapper/ovirt-tmp isize=512 agcount=4, agsize=131072 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 524288 to 8388608
nsednev-he-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 12K 7.8G 1% /dev/shm
tmpfs 7.8G 8.8M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/ovirt-root 15G 3.3G 12G 22% /
/dev/mapper/ovirt-home 1014M 40M 975M 4% /home
/dev/mapper/ovirt-tmp 32G 265M 32G 1% /tmp
/dev/mapper/ovirt-var 51G 725M 51G 2% /var
/dev/mapper/ovirt-log 10G 114M 9.9G 2% /var/log
/dev/mapper/ovirt-audit 1014M 40M 975M 4% /var/log/audit
/dev/vda1 1014M 166M 849M 17% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
In the end, I successfully extended the XFS filesystems: /var from 20G to 51G and /tmp from 2G to 32G.
The additional, previously unused 62G from the deployment was consumed by /var and /tmp.
It is now possible to extend the HE VM's filesystems after deployment when the initially created disk was larger than the minimum required size.
Moving to verified.
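For reference, the manual extension procedure verified above can be condensed into a short sketch. The device name (/dev/vda3), VG name (ovirt), and sizes come from this transcript; adjust them for your environment. The LVM/XFS commands must run as root against real devices, so they are shown as comments here, and the arithmetic at the end just cross-checks the extent counts that lvextend printed (4 MiB physical extents):

```shell
#!/bin/sh
# Sketch of the post-deploy extension procedure from the transcript above.
# The destructive commands require root and the real devices, so they are
# left commented; device and VG names are the ones used in this bug.
#
#   pvcreate /dev/vda3            # turn the new partition into an LVM PV
#   vgextend ovirt /dev/vda3      # add the PV to the existing "ovirt" VG
#   lvextend -L +31G /dev/ovirt/var
#   xfs_growfs /dev/ovirt/var     # XFS grows online, while mounted
#
# Cross-check the extent math lvextend reported: growing a 20 GiB LV by
# 31 GiB with a 4 MiB PE size should yield 13056 extents.
pe_size_mib=4
old_gib=20
add_gib=31
new_extents=$(( (old_gib + add_gib) * 1024 / pe_size_mib ))
echo "$new_extents"
```

This agrees with the lvextend output above ("changed from 20.00 GiB (5120 extents) to 51.00 GiB (13056 extents)").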
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:3246

+1 for the Doc Text
## Description of problem:
If a customer specifies a disk larger than 49G during the HostedEngine install, the installer allocates all of the extra space to the root ("/") filesystem. For RHVM, the critical filesystems are /var and /var/log. Automatically growing "/" to use the entire disk prevents us from extending those filesystems, or from creating a new filesystem if the user requires one.

## Version-Release number of selected component (if applicable):
ovirt-engine-4.2.7.4-0.1.el7ev.noarch

## How reproducible:
Always

## Steps to Reproduce:
1. Deploy HostedEngine
2. Create a HE disk larger than 49G
3. Post install, try to extend '/var' or '/var/log'

## Actual results:
All the space has been taken up by "/".

## Expected results:
I'm not sure what the best solution is, but my options are:
1. Don't allocate the additional space. Allow the user to grow (or create) specific filesystems as needed, i.e.:
   lvextend -L +X vg/lv
   xfs_growfs /dev/vg/lv
2. Have some intelligence (disk size profiles?) that sizes /var and /var/log intelligently, depending on the size of the disk that the user has configured.
3. Specify filesystem sizes during install?

'1' is probably the easiest, but does require user intervention post-install to grow the filesystems when required. '2' is probably a bit too error-prone (i.e., if a filesystem is created too large, it's difficult to shrink it post-install). '3' probably just over-complicates the install process.

## Additional info:
The current default layout looks like this:

Filesystem               Size  Used Avail Use% Mounted on
/dev/vda3                7.2G  3.3G  3.9G  47% /
/dev/mapper/ovirt-var     20G  297M   20G   2% /var
/dev/mapper/ovirt-home    1G    33M  982M   4% /home
/dev/mapper/ovirt-log     10G   71M   10G   1% /var/log
/dev/mapper/ovirt-audit   1G    56M  959M   6% /var/log/audit
/dev/vda1                 1G   161M  854M  16% /boot
<swap>                    8G

The Log Collector is generated on the '/' filesystem, right? Some larger environments might run out of space if generating a full LC - is this the reason for having large '/' filesystems?
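Option 2 above (disk size profiles) could be sketched as simple arithmetic: keep fixed sizes for the small filesystems and split the remainder between /var and /var/log. This is purely a hypothetical illustration of the idea, not anything the installer implements; the fixed sizes mirror the appliance defaults seen in this bug, and the 2:1 split ratio is made up:

```shell
#!/bin/sh
# Hypothetical size-profile calculation (illustrating option 2 only; not
# real installer behaviour). Fixed allocations use the default appliance
# layout from this bug; the remainder is split 2:1 between /var and
# /var/log. All numbers are in GiB.
disk_gib=120
fixed_gib=$(( 1 + 8 + 15 + 1 + 2 + 1 ))   # /boot + swap + / + /home + /tmp + audit
remainder=$(( disk_gib - fixed_gib ))
var_gib=$(( remainder * 2 / 3 ))
log_gib=$(( remainder - var_gib ))
echo "/var=${var_gib}G /var/log=${log_gib}G"
```

As the comment in the report notes, the risk with any such profile is that an over-sized filesystem is hard to shrink post-install, so erring small and letting the admin grow LVs later (option 1) is the safer default.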
At the time of writing, cloud-init "growpart" doesn't work anyway, so we can actually grow filesystems the way I want. However, once/if this is fixed, we will have the problem that I am describing.

Also, '/' is currently not on LVM. We need to move it back onto LVM so it can be easily extended if needed. There is an RFE open for this - BZ#1579000.
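As a quick check related to that last point, findmnt shows which block device backs "/": an LVM-backed root appears as a device-mapper path (/dev/mapper/ovirt-root in the fixed appliance), while the pre-fix layout reports a plain partition such as /dev/vda3. A minimal sketch:

```shell
#!/bin/sh
# Print the device backing / and classify it. An LVM-backed root shows up
# as /dev/mapper/<vg>-<lv>; a plain partition as /dev/vdaN. The exact
# output depends on the system this runs on.
src=$(findmnt -n -o SOURCE /)
echo "root is on: $src"
case "$src" in
  /dev/mapper/*) echo "root is on LVM" ;;
  *)             echo "root is not on a device-mapper volume" ;;
esac
```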