Red Hat Bugzilla – Bug 1313588
oVirt host breaks local iSCSI server
Last modified: 2017-03-19 10:56:44 EDT
Description of problem:
If the iSCSI target and the oVirt host run on the same server, we get a configuration issue after reboot.
Steps to Reproduce:
1. Create LVM partition
2. Add it to targetcli
3. install ovirt host
4. add iscsi to storagedomain
5. Reboot host
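For reference, steps 1-2 can be sketched with stock LVM and targetcli commands. The VG/LV names and the IQN below are illustrative, not taken from the original report:

```shell
# Create an LV to serve as the iSCSI backing store
# (VG "centos" and LV "iscsi" are example names).
lvcreate -L 100G -n iscsi centos

# Export the LV as a block backstore and attach it to a target.
targetcli /backstores/block create name=iscsi_lv dev=/dev/centos/iscsi
targetcli /iscsi create iqn.2016-03.local.example:ovirt
targetcli /iscsi/iqn.2016-03.local.example:ovirt/tpg1/luns create /backstores/block/iscsi_lv
targetcli saveconfig
```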
Actual results:
vdsm creates an LVM volume inside the iSCSI LUN, which is itself already an LVM volume. After reboot, LVM sees the recursive stack through multipath and device-mapper (iscsi -> LVM -> PV -> VG -> domain LVM). targetcli cannot access the backing device because it reports "Device is busy".
You can add a filter to LVM and targetcli, and also delete some DM devices, but with the filter in place vdsm does not see the storage domain after reboot.
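The LVM filter mentioned above would look roughly like this in /etc/lvm/lvm.conf; the device path is an assumption based on the setup described, not a value from the report:

```shell
# /etc/lvm/lvm.conf (devices section) - reject the LV that backs the
# iSCSI target so the host's LVM does not scan the nested PV on boot,
# and accept everything else.
devices {
    global_filter = [ "r|^/dev/centos/iscsi$|", "a|.*|" ]
}
```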
Expected results:
Everything should work normally. Many small companies run several services on a single server.
*** Bug 1313587 has been marked as a duplicate of this bug. ***
(In reply to Badalyan Vyacheslav from comment #0)
> Description of problem:
> If we have iscsi and host on one server we get configuration issue after
> How reproducible:
> Steps to Reproduce:
> 1. Create LVM partition
What do you mean?
Please share the output of lsblk and fdisk -l (partition table listing) before
installing the host and after it.
> 2. Add it to targetcli
> 3. install ovirt host
> 4. add iscsi to storagedomain
Did you add iscsi storage domain using a device used by targetcli?
> 5. Reboot host
> Actual results:
> vdsm creates an LVM volume inside the iSCSI LUN, which is itself already an LVM volume.
Vdsm does not create LVM partitions. Vdsm creates a VG from the multipath
devices you selected in the engine UI.
If you selected a device which is already in use, Engine will warn you
and you have to confirm the operation. If you confirmed, you cannot
complain that oVirt did what you asked for :-)
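What vdsm does when creating a block storage domain is roughly equivalent to the following; the multipath device name and VG name are illustrative placeholders, not vdsm's actual values:

```shell
# Sketch: the selected multipath device becomes a PV, and a domain VG
# is created on top of it (vdsm uses the domain UUID as the VG name).
pvcreate /dev/mapper/36001405f0e9a1c2d3e4f50617
vgcreate 5b9e479a-0000-0000-0000-000000000000 /dev/mapper/36001405f0e9a1c2d3e4f50617
```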
Please also attach engine and vdsm logs showing the time when you created this storage domain.
I removed iSCSI and moved to NFS.
1. I had a VG named centos with an LV named iscsi.
2. I added the LV iscsi to targetcli as a backstore.
3. I connected Engine to the iSCSI target.
4. Engine created a VG inside the iSCSI LUN.
5. I rebooted.
6. LVM or multipath or device-mapper found VG->LV->VG->LV.
7. targetcli could not start iSCSI because of "device is busy".
8. Result - the storage domain is DOWN.
I also tried not creating LVM and using a "clean" partition. The result is the same! If the iSCSI target and the host are the same machine, the VG partitions are found before targetcli starts. Result - "device is busy" and iSCSI is broken.
It is a very popular use case to run storage on the same host where the VMs run. I have a RAID of SSDs with 4 GB/s throughput, and a bond of 4 e1000 NICs cannot deliver all that performance - bandwidth, MTU and many other things.
Why do you create LVM on iSCSI? Many cheap storage appliances use Linux and targetcli and may have the same issue. Why don't you create the storage domain in file mode, like NFS? Why do you need to create a VG and LVs? Using LVM breaks any iSCSI target that is backed by LVM or a device partition.
Nir - again another auto-activation variant? Sounds quite similar to what Lev had in ovirt-system-tests, btw.
We disabled lvmetad service, so this may work for you now.
Can you test with a current 4.1.1 nightly?
We believe this issue should be solved in 4.1.1, since we disabled lvmetad.
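For context, disabling lvmetad on a host amounts to roughly the following; this is a sketch of the general procedure, not necessarily how vdsm configures it:

```shell
# In /etc/lvm/lvm.conf, in the "global" section, set:
#   use_lvmetad = 0
# Then stop and mask the daemon so it is not socket-activated again,
# preventing cached/nested PVs from being auto-activated at boot.
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
```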
No patches are attached to the bug, so there is no way for the bot to know if this is in a build or not.
Please make sure to attach the relevant patches from gerrit.
(In reply to Eyal Edri from comment #7)
> No patches are attached to the bug, so there is no way for the bot to know if
> this is in a build or not.
> Please make sure to attach the relevant patches from gerrit.
This seems to have been fixed by https://gerrit.ovirt.org/#/c/71674/, which is part of 4.19.6.
Moving to ON_QA
This should be fixed in 4.1.1, please reopen if you still have an issue.