Bug 1157238

Summary: [hosted-engine] [iSCSI support] The LUN used for engine VM disk is allowed to be picked for storage domain creation/extension
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-hosted-engine-setup
Version: 3.5.0
Hardware: x86_64
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: urgent
Priority: unspecified
Target Release: 3.5.0
Whiteboard: storage
oVirt Team: Storage
Doc Type: Bug Fix
Type: Bug
Reporter: Elad <ebenahar>
Assignee: Sandro Bonazzola <sbonazzo>
QA Contact: Elad <ebenahar>
CC: acanan, didi, ecohen, gklein, iheim, lsurette, lveyde, scohen, stirabos, tnisan
Bug Blocks: 1067162

Attachments: logs and screenshot (flags: none)

Description Elad 2014-10-26 13:04:24 UTC
Created attachment 950794 [details]
logs and screenshot

Description of problem:
After hosted-engine deployment using iSCSI, I was able to pick the LUN that is used for the engine's disk when creating an iSCSI storage domain in the setup. Picking this LUN for the creation of a new iSCSI storage domain would corrupt the engine's disk and destroy the setup.

Version-Release number of selected component (if applicable):
rhev 3.5 vt7
rhel6.6 host
ovirt-hosted-engine-setup-1.2.1-1.el6ev.noarch
rhevm-3.5.0-0.17.beta.el6ev.noarch
vdsm-4.16.7.1-1.el6ev.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Deploy hosted-engine using iSCSI
2. After the deployment has finished, try to create an iSCSI storage domain in the setup and pick the LUN that is used by the engine VM's disk (see the sketch below for identifying this LUN on the host)
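
A possible way to identify which LUN backs the engine disk before picking it (a sketch; the multipath WWID 3514f0c5447600138 is the one seen in this environment and will differ elsewhere):

# pvs -o pv_name,vg_name           # the non-OS VG here is the hosted-engine storage domain
# multipath -ll 3514f0c5447600138  # shows the iSCSI paths behind that multipath device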


Actual results:
The LUN can be picked in the webadmin. Picking it corrupts the engine VM's disk, which destroys the setup.


The PV /dev/mapper/3514f0c5447600138 is used for VG ad31ebd6-90ac-43d7-861f-29849f5fe916, which is where the engine's disk (LV f20012e4-da44-4443-9429-404cc90d7098) is located.



# pvs

  PV                            VG                                   Fmt  Attr PSize  PFree 
  /dev/mapper/3514f0c5447600138 ad31ebd6-90ac-43d7-861f-29849f5fe916 lvm2 a--  39.62g 10.50g
  /dev/sdm2                     vg0                                  lvm2 a--  68.06g     0 


# lvs

LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  12242778-22cb-42fb-a197-64c7aa2714c2 ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-a----- 128.00m                                                    
  c81cea24-889b-4ec0-a682-d97ca6809667 ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-ao---- 128.00m                                                    
  f20012e4-da44-4443-9429-404cc90d7098 ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-ao----  25.00g                                                    
  ids                                  ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-ao---- 128.00m                                                    
  inbox                                ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-a----- 128.00m                                                    
  leases                               ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-a-----   2.00g                                                    
  master                               ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-a-----   1.00g                                                    
  metadata                             ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-a----- 512.00m                                                    
  outbox                               ad31ebd6-90ac-43d7-861f-29849f5fe916 -wi-a----- 128.00m                                                    
  lv_root                              vg0                                  -wi-ao----  60.20g                                                    
  lv_swap                              vg0                                  -wi-ao----   7.86g          
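
For reference, the same PV-to-LV mapping can also be printed in one step (a sketch using this report's VG name):

# lvs -o +devices ad31ebd6-90ac-43d7-861f-29849f5fe916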



The 'luns' table in the engine DB is empty:


 physical_volume_id | lun_id | volume_group_id | serial | lun_mapping | vendor_id | product_id | device_size 
--------------------+--------+-----------------+--------+-------------+-----------+------------+-------------
(0 rows)
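
For context, the output above is presumably from a direct query against the engine database, along these lines (a sketch; the database name 'engine' and running it as the postgres user are assumptions):

# su - postgres -c "psql engine -c 'SELECT * FROM luns;'"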


Attached is a screenshot from the webadmin of the storage domain creation pop-up window.

Expected results:
The engine should block the user from selecting the LUN on which the engine's disk is located.


Additional info: logs and screenshot

Comment 1 Elad 2014-10-26 14:16:06 UTC
It's also possible to pick the LUN as a direct LUN.

Comment 2 Tal Nisan 2014-10-28 10:29:54 UTC
We currently don't have any indication of this LUN in the engine. The engine has to know about this LUN before we can block this operation; moving to integration to set up this data so that the storage flows can use it.

Comment 3 Sandro Bonazzola 2014-11-05 13:03:42 UTC
http://gerrit.ovirt.org/34783 adds the HE disk to the engine.
Now that the LUN is known to the engine, is any other change required on the hosted-engine side?

I've verified that adding the disk to the engine is enough to prevent the same LUN from being picked for storage domain creation.
I'm not sure how to verify the extension part.
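
A possible engine-side re-check (a sketch; same database-name assumption as the query earlier in this bug): after the patch, the 'luns' table should presumably no longer be empty, e.g.

# su - postgres -c "psql engine -c 'SELECT lun_id, volume_group_id FROM luns;'"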

Moving back to storage.

Comment 4 Sandro Bonazzola 2014-11-12 10:07:59 UTC
Moving this to MODIFIED; the merged patch seems to solve this bug, as per comment #3.

Comment 5 Elad 2014-11-25 09:06:11 UTC
The device used for the engine's disk is now listed as a direct LUN.
BUT, removing this LUN from the setup is allowed. I'm moving this bug to VERIFIED and opening a separate bug for the removal issue.

Used rhev 3.5 vt11
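
For completeness, the registered disks (including the direct LUN) can presumably also be listed through the engine REST API (a sketch; the credentials and FQDN below are placeholders):

# curl -k -u admin@internal:PASSWORD https://<engine-fqdn>/api/disks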

Comment 6 Elad 2014-11-25 09:28:35 UTC
Remove LUN issue: 
https://bugzilla.redhat.com/show_bug.cgi?id=1167668

Comment 7 Allon Mureinik 2015-02-16 19:11:41 UTC
RHEV-M 3.5.0 has been released, closing this bug.
