Bug 1357068 - [RFE] Raise an error in case that an installation requirement is not met (no thin or no separate /var)
Summary: [RFE] Raise an error in case that an installation requirement is not met (no thin or no separate /var)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-node
Classification: oVirt
Component: Installation & Update
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.1.0-beta
Target Release: 4.1
Assignee: Douglas Schilling Landgraf
QA Contact: Qin Yuan
URL:
Whiteboard:
Duplicates: 1415225
Depends On:
Blocks: 1338732 1370433 1390062
 
Reported: 2016-07-15 15:50 UTC by daniel
Modified: 2017-03-31 17:37 UTC
CC List: 20 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1390062
Environment:
Last Closed: 2017-02-15 14:50:52 UTC
oVirt Team: Node
Embargoed:
fdeutsch: ovirt-4.1?
ykaul: exception?
cshao: testing_plan_complete+
rule-engine: planning_ack?
fdeutsch: devel_ack+
ycui: testing_ack+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1359181 0 high CLOSED Docs: RHVH: Add documentation for requires in case of custom partitioning 2021-02-22 00:41:40 UTC
oVirt gerrit 65737 0 'None' MERGED imgbase: add validation for missing thin pool 2020-10-22 09:34:35 UTC
oVirt gerrit 67710 0 'None' MERGED osupdates: check if /var is separate partition 2020-10-22 09:34:35 UTC

Internal Links: 1359181

Description daniel 2016-07-15 15:50:43 UTC
Description of problem:

Installing RHVH 4.0 (RHEV-H-7.2-20160627.2-RHVH-x86_64-dvd1.iso), selecting custom partitioning, and choosing a profile other than thin-pool in the dialog that suggests a partitioning ("Click here to create them automatically") leads to a traceback at the end of the installation process:



There was an error running the kickstart script at line 9.  This is a fatal error and installation will be aborted.  The details of this error are:

[INFO] Trying to create a manageable base from '/'
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 42, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 80, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line 167, in post_argparse
    app.imgbase.init_layout_from(args.source, init_nvr)
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 230, in init_layout_from
    self.init_tags_on(existing_lv)
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 217, in init_tags_on
    pool.addtag(self.thinpool_tag)
AttributeError: 'NoneType' object has no attribute 'addtag'


Version-Release number of selected component (if applicable):  RHEV-H-7.2-20160627.2-RHVH-x86_64-dvd1.iso 


How reproducible:


Steps to Reproduce:
1. Install rhvh from the iso
2. Select custom partitioning and "Click here to create them automatically" (do not choose thin-pool as the profile/layout)
3. Follow the installation procedure and wait for the install to finish
4. The traceback above appears at the end of installation

Actual results:
- Custom partitioning is broken if a layout other than thin-pool based is selected


Expected results:
- Only the thin-pool profile should be available, no other
Additional info:

After resetting the host, it boots, but no thin volumes are present:

[root@localhost ~]# lvs -v
    Using logical volume(s) on command line.
  LV   VG   #Seg Attr       LSize  Maj Min KMaj KMin Pool Origin Data%  Meta%  Move Cpy%Sync Log Convert LV UUID                                LProfile
  root r4b     1 -wi-ao---- 37.94g  -1  -1  253    0                                                     tgNGeQ-oZuB-PoKH-zx9D-fixC-G0TC-To3Y4E         
  swap r4b     1 -wi-ao----  6.00g  -1  -1  253    1                                                     teroio-htEB-qb8U-2HB5-lEzu-b2RJ-URm0Z2         
  var  r4b     1 -wi-ao---- 15.00g  -1  -1  253    2                                                     i6AKVv-hxLS-VzRN-QKBS-2iOt-I3jT-yBkJlQ         
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# pvs -v
    Using physical volume(s) on command line.
    Found same device /dev/vda2 with same pvid tT7XGbkd6TcR5OVoZyIt3aJOMBiN7hY7
  PV         VG   Fmt  Attr PSize  PFree  DevSize PV UUID                               
  /dev/vda2  r4b  lvm2 a--  59.00g 60.00m  59.00g tT7XGb-kd6T-cR5O-VoZy-It3a-JOMB-iN7hY7
[root@localhost ~]# 





Selecting thin-pool in the custom layout works fine:
[root@localhost ~]# lvs -a
  LV                           VG   Attr       LSize  Pool   Origin                     Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]              r4b  ewi------- 48.00m                                                                          
  pool00                       r4b  twi-aotz-- 44.13g                                   3.61   2.69                            
  [pool00_tdata]               r4b  Twi-ao---- 44.13g                                                                          
  [pool00_tmeta]               r4b  ewi-ao---- 48.00m                                                                          
  rhevh7-ng-4.0-0.20160616.0   r4b  Vwi---tz-k 29.09g pool00 root                                                              
  rhevh7-ng-4.0-0.20160616.0+1 r4b  Vwi-aotz-- 29.09g pool00 rhevh7-ng-4.0-0.20160616.0 4.74                                   
  root                         r4b  Vwi-a-tz-- 29.09g pool00                            4.72                                   
  swap                         r4b  -wi-ao----  6.00g                                                                          
  var                          r4b  Vwi-aotz-- 15.00g pool00                            1.17                                   



To my understanding, the traceback comes from the hook that runs at the end of installation to create the snapshots, which is not possible without the thin pool, at least not with this hook.

Comment 1 Fabian Deutsch 2016-07-17 15:38:05 UTC
Currently this should be covered by the right documentation (explicitly stating that thin partitioning is required).

In the future we might want to enhance our install class to raise a meaningful error, or prevent installation altogether, if thin provisioning is not used. (This RFE would be something for ~4.2.)

Daniel, should this bug cover a) that the documentation is correct or b) be an RFE to cover the install class enhancement?

Comment 2 daniel 2016-07-17 17:09:12 UTC
Fabian,

my understanding is that the thinpool construct is essential to make rhvh-node-ng work as it is designed.
The problematic thing here, to my understanding, is that if one chooses to do custom partitioning, one gets an error, but if you reset the server, everything "seems to work as expected". But I guess - and probably I'm wrong - that further updates will fail if there are no thin pools, which will make customers raise cases and get upset if they then have to reinstall the hypervisor with "correct" partitioning.

From my point of view, we should not allow configurations that make the system raise errors or break further updates (I guess custom partitioning is needed in case I'd like to use hyperconverged setups once they are supported), as this will increase cases for CEE.


In case this is not possible before GA, we should indeed highlight this in the documentation, although I guess lots of customers will not read it and still raise cases.

Overall I think having an installation option that causes errors or makes features fail is not an RFE but a bug.

Comment 3 cshao 2016-07-22 08:56:08 UTC
Agree with comment 1, RHVH should use thin-pool as the layout, but as for this bug, I can reproduce it.

Test version:
RHEV-H-7.2-20160627.2-RHVH-x86_64-dvd1.iso

Test steps:
1. Install rhvh from iso
2. Select custom partitioning
3. Select "Click here to create them automatically" (do not choose thin-pool as profile /layout)
4. Follow the installation procedure and wait for the install to finish
5. The traceback above appears at the end of installation

Comment 4 Fabian Deutsch 2016-07-22 12:46:52 UTC
I agree that we want to check the partitioning during the installation to prevent the user from ending up with an incorrect storage configuration.

For GA, however, we'll need to get by with documentation (I've filed bug 1359181 for this).

This bug can be kept to enhance our install class to catch this and raise an error in case the requirements are not met.

Comment 5 Red Hat Bugzilla Rules Engine 2016-08-29 16:05:13 UTC
Bug tickets must have version flags set prior to targeting them to a release. Please ask maintainer to set the correct version flags and only then set the target milestone.

Comment 6 Ryan Barry 2016-08-30 00:05:34 UTC
(In reply to Fabian Deutsch from comment #4)
> This bug can be kept to enhance our install class to catch this and raise
> an error in case the requirements are not met.

I'll ask the Anaconda team, but as far as I can tell, this cannot be handled from the installclass.

It appears that we can set autopartitioning requirements, but pyanaconda/storage_utils.sanity_check appears to inherit nothing from the installclass.

I'll keep looking, but the appropriate place to output an error message is probably during imgbased initialization.

Comment 7 Fabian Deutsch 2016-09-03 11:43:46 UTC
Considering comment 6, I'd rather favor improving imgbased to raise _meaningful_ errors, and adding errors where requirements are not met.

Modifying anaconda seems to be a larger gap.
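
As an illustration only - a hypothetical sketch, not the actual imgbased patch - such a check could shell out to findmnt and lvs and abort with a clear message when '/' is not backed by a thin pool:

import subprocess
import sys


def root_thinpool():
    """Return the name of the thin pool backing '/', or None."""
    # Resolve the device that backs '/', e.g. /dev/mapper/rhvh-root.
    dev = subprocess.check_output(["findmnt", "-n", "-o", "SOURCE", "/"]).strip()
    try:
        # Ask LVM which thin pool (pool_lv) the LV belongs to, if any.
        out = subprocess.check_output(["lvs", "--noheadings", "-o", "pool_lv", dev])
    except subprocess.CalledProcessError:
        # '/' is not on an LVM logical volume at all.
        return None
    return out.strip() or None


if __name__ == "__main__":
    if root_thinpool() is None:
        sys.exit("[ERROR] LVM Thin Provisioning partitioning scheme is required.")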

Comment 12 Ryan Barry 2016-12-15 13:51:40 UTC
This is merged. Shouldn't this be on MODIFIED?

Comment 17 Qin Yuan 2017-01-11 06:51:16 UTC
Test Version:
redhat-virtualization-host-4.1-0.20170104.0
imgbased-0.9.2-0.1.el7ev.noarch

1. No thin pool

Test Steps:
1) Install RHVH 4.1 via interactive anaconda.
2) Enter the MANUAL PARTITIONING page by choosing "I will configure partitioning" on the INSTALLATION DESTINATION page.
3) Select partitioning scheme as LVM, not LVM Thin Provisioning.
4) Click "Click here to create them automatically".
5) Finish the remaining steps and wait for the error window to pop up.

Test Results:
A window pops up at the end of installation but before reboot. There is an error message at the beginning of the window saying:
[ERROR] LVM Thin Provisioning partitioning scheme is required.
For autoinstall via Kickstart with LVM Thin Provisioning check options: --thinpool and --grow
Please consult documentation for details

2. No separate /var

Test Steps:
1) Install RHVH 4.1 via interactive anaconda.
2) Enter the MANUAL PARTITIONING page by choosing "I will configure partitioning" on the INSTALLATION DESTINATION page.
3) Delete existing partitions if needed.
4) Select partitioning scheme as LVM Thin Provisioning.
5) Add partitions manually using the "+" icon in the lower left corner. Add /, /boot, and swap; do not add /var.
6) Finish the remaining steps and wait for the error window to pop up.

Test Results:
A window pops up at the end of installation but before reboot. There is an error message at the bottom of the window saying:
It's required /var as separate mountpoint!
Please check documentation for more details!


In conclusion, friendly error messages pop up when there is no thin pool or no separate /var.
During the verification process a new Bug 1412056 was reported. It doesn't affect the solution of this RFE, so this RFE's status is changed to VERIFIED. (A kickstart layout satisfying both requirements is sketched below.)
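
For reference, a minimal kickstart partitioning sketch that would satisfy both checks (illustrative only; the volume group name, sizes, and filesystem types are assumptions, not values taken from this bug):

# Physical layout (sizes in MiB, adjust to the actual hardware).
part /boot --size=1024 --fstype=xfs
part pv.01 --size=61440 --grow
volgroup rhvh pv.01
logvol swap --vgname=rhvh --name=swap --fstype=swap --size=6144
# Thin pool, as required by the first check (--thinpool and --grow).
logvol none --vgname=rhvh --name=pool00 --thinpool --size=51200 --grow
# Root and a separate /var, both thin volumes in the pool (second check).
logvol / --vgname=rhvh --name=root --thin --poolname=pool00 --fstype=ext4 --size=10240
logvol /var --vgname=rhvh --name=var --thin --poolname=pool00 --fstype=ext4 --size=15360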

Comment 18 Sandro Bonazzola 2017-02-07 09:14:51 UTC
*** Bug 1415225 has been marked as a duplicate of this bug. ***

Comment 19 BugMasta 2017-02-22 17:54:36 UTC
Closed. Yeah, so why am I getting this error when I try to install RHV into a KVM guest?

Using the RHV 4.0.6 iso: RHVH-4.0-20170203.0-RHVH-x86_64-dvd1.iso

I'm trying to do a bit of testing with RHV. I know the performance is going to be low running a hypervisor under KVM, but it should work. I've met all the requirements for RHV.

I've partitioned manually with 1.5 GB for /boot, 10 GB for /, 18 GB for /var, and 1.5 GB for swap. Allocated 2 GB of RAM, 2 cores, and "copy host CPU configuration", so that should ensure the Intel virtualization CPU extensions are provided to the VM.

I repeat, I have even added /var as a separate mountpoint.

And I get what looks like pretty much the same error:

There was an error running the kickstart script at line 9.  This is a fatal error and installation will be aborted.  The details of this error are:

[INFO] Trying to create a manageable base from '/'
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 51, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line 163, in post_argparse
    layout.initialize(args.source, args.init_nvr)
  File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line 211, in initialize
    self.app.imgbase.init_layout_from(source, init_nvr)
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 230, in init_layout_from
    self.init_tags_on(existing_lv)
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 217, in init_tags_on
    pool.addtag(self.thinpool_tag)
AttributeError: 'NoneType' object has no attribute 'addtag'

Comment 20 Qin Yuan 2017-02-23 05:55:04 UTC
The target milestone of this bug is ovirt-4.1.0-beta, and the released 4.1 beta iso is RHVH-4.1-20170203.1-RHVH-x86_64-dvd1.iso.

AFAIK, the RHV 4.0.6 iso does not contain the fix for this bug yet.

The traceback info indicates the partitioning scheme is not LVM Thin Provisioning.
Please make sure to choose "LVM Thin Provisioning" before manually partitioning.

Douglas, could you please confirm that I'm providing the right information?

Comment 21 BugMasta 2017-02-23 09:30:46 UTC
Well, it's just incredibly weak for the installer to allow the user to configure a disk layout that won't work. There wasn't even a hint from the installer that thin provisioning was mandatory.

I hope 4.1 will resolve this issue properly.

Comment 22 Ryan Barry 2017-02-23 12:56:54 UTC
It will not, unfortunately.

This is a constraint/limitation in Anaconda itself.

See: https://bugzilla.redhat.com/show_bug.cgi?id=1412151

