Description of problem:
After reassigning a host from one datacenter to another, its LVM state is messed up.

Version-Release number of selected component (if applicable):
vdsm-4.13.2-0.8.el6ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Add the host to the Default datacenter using NFS storage and run a VM on it.
2. Create a new datacenter using iSCSI.
3. Stop the VM, put the host into maintenance and reassign it to the other datacenter.
4. Run another VM (from the new datacenter) on the host.
5. SSH to the host and run lvs.

Actual results:

# lvs
  /dev/mapper/1IET_00010001: read failed after 0 of 4096 at 21474770944: Input/output error
  /dev/mapper/1IET_00010001: read failed after 0 of 4096 at 21474828288: Input/output error
  /dev/mapper/1IET_00010001: read failed after 0 of 4096 at 0: Input/output error
  /dev/mapper/1IET_00010001: read failed after 0 of 4096 at 4096: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/metadata: read failed after 0 of 4096 at 536805376: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/metadata: read failed after 0 of 4096 at 536862720: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/metadata: read failed after 0 of 4096 at 0: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/metadata: read failed after 0 of 4096 at 4096: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/leases: read failed after 0 of 4096 at 2147418112: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/leases: read failed after 0 of 4096 at 2147475456: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/leases: read failed after 0 of 4096 at 0: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/leases: read failed after 0 of 4096 at 4096: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/ids: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/ids: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/ids: read failed after 0 of 4096 at 0: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/ids: read failed after 0 of 4096 at 4096: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/inbox: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/inbox: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/inbox: read failed after 0 of 4096 at 0: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/inbox: read failed after 0 of 4096 at 4096: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/outbox: read failed after 0 of 4096 at 134152192: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/outbox: read failed after 0 of 4096 at 134209536: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/outbox: read failed after 0 of 4096 at 0: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/outbox: read failed after 0 of 4096 at 4096: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/master: read failed after 0 of 4096 at 1073676288: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/master: read failed after 0 of 4096 at 1073733632: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/master: read failed after 0 of 4096 at 0: Input/output error
  /dev/4ae49b1b-550b-4a17-b545-dcbfaef98303/master: read failed after 0 of 4096 at 4096: Input/output error
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  b3274055-9e41-488f-954a-625a3389a27a 4ae49b1b-550b-4a17-b545-dcbfaef98303 -wi-------   6.00g
  ids                                  4ae49b1b-550b-4a17-b545-dcbfaef98303 -wi-a----- 128.00m
  inbox                                4ae49b1b-550b-4a17-b545-dcbfaef98303 -wi-a----- 128.00m
  leases                               4ae49b1b-550b-4a17-b545-dcbfaef98303 -wi-a-----   2.00g
  master                               4ae49b1b-550b-4a17-b545-dcbfaef98303 -wi-a-----   1.00g
  metadata                             4ae49b1b-550b-4a17-b545-dcbfaef98303 -wi-a----- 512.00m
  outbox                               4ae49b1b-550b-4a17-b545-dcbfaef98303 -wi-a----- 128.00m
  lv_home                              vg_slot7                             -wi-ao----  20.00g
  lv_root                              vg_slot7                             -wi-ao----  50.00g
  lv_shit                              vg_slot7                             -wi-ao----  10.00g
  lv_swap                              vg_slot7                             -wi-ao----   9.82g

Expected results:
Proper clean-up when the host is removed from the datacenter.

Additional info:
Not sure whether one datacenter has to use NFS and the other iSCSI in order to reproduce. It might happen as well with the same storage type but a different storage instance.
Nir, is this resolved with http://gerrit.ovirt.org/#/c/24088/ ?

Petr, a possible source of problems is that your tgtd server is not configured to create LUNs with unique IDs. You need to edit your targets.conf and add the scsi_id and scsi_sn fields. Example:

<target MasterBackup>
    allow-in-use yes
    <backing-store /dev/vg0/MasterBackup>
        lun 1
        scsi_id MasterBackup
        scsi_sn 444444444401
    </backing-store>
</target>
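The uniqueness suggestion above can be checked directly on the host. This is a minimal sketch, assuming the RHEL 6 udev layout where the tool lives at /lib/udev/scsi_id (the path differs on newer distributions); it prints the SCSI identifier of each sd device, and two devices reporting the same ID point at non-unique scsi_id/scsi_sn entries in targets.conf.

```shell
#!/bin/sh
# Print the SCSI identifier for each sd* block device. Duplicate IDs in the
# output indicate LUNs that tgtd exported without unique scsi_id/scsi_sn.
# NOTE: /lib/udev/scsi_id is the RHEL 6 path (an assumption for this bug's
# environment); the sketch tolerates hosts where the tool or the devices
# are absent and simply prints nothing for them.
print_scsi_ids() {
    for dev in /sys/block/sd*; do
        [ -e "$dev" ] || continue          # glob matched nothing
        name=$(basename "$dev")
        # --whitelisted lifts the default device blacklist
        id=$(/lib/udev/scsi_id --whitelisted --device="/dev/$name" 2>/dev/null)
        echo "$name ${id:-<no id>}"
    done
    return 0
}

print_scsi_ids
```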
(In reply to Ayal Baron from comment #1)
> Nir, is this resolved with http://gerrit.ovirt.org/#/c/24088/ ?

I don't see any connection. This is what happens when you move a host to maintenance: we disconnect from the storage but leave junk devices behind.
(In reply to Nir Soffer from comment #2)
> (In reply to Ayal Baron from comment #1)
> > Nir, is this resolved with http://gerrit.ovirt.org/#/c/24088/ ?
>
> I don't see any connection.
>
> This is what happens when you move a host to maintenance: we disconnect
> from the storage but leave junk devices behind.

OK, so we need to clean it up...
These are just warnings from LVM commands, and they have no effect on the functionality of the system. No reason for high priority.
(In reply to Ayal Baron from comment #3)
> (In reply to Nir Soffer from comment #2)
> > (In reply to Ayal Baron from comment #1)
> > > Nir, is this resolved with http://gerrit.ovirt.org/#/c/24088/ ?
> >
> > I don't see any connection.
> >
> > This is what happens when you move a host to maintenance: we disconnect
> > from the storage but leave junk devices behind.
>
> OK, so we need to clean it up...

The solution is to deactivate all the LVs of the VG on the iSCSI connection that we are about to disconnect.
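The proposed fix can be sketched as a short shell sequence. This is not vdsm's actual code, just an illustration of the ordering the comment describes: deactivate the storage-domain VG's LVs first, then log out of the iSCSI session, so no stale /dev entries remain. The VG name below is the storage-domain UUID from this bug's output, and the DRY_RUN guard is added so the sketch can be read and tested without touching real storage.

```shell
#!/bin/sh
# Sketch of the fix described above: deactivate every LV of the given VG
# before closing the iSCSI session. DRY_RUN=1 (the default) only prints
# the commands; set DRY_RUN=0 on a real host to execute them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

deactivate_and_logout() {
    vg=$1
    # vgchange --available n deactivates all LVs in the VG, removing their
    # device-mapper nodes, so nothing is left to fail with I/O errors later
    run vgchange --available n "$vg"
    # only then is it safe to drop the iSCSI session
    run iscsiadm --mode node --logout
}

deactivate_and_logout 4ae49b1b-550b-4a17-b545-dcbfaef98303
```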
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, please postpone it to a later release. All bugs not postponed on GA release will be automatically re-targeted to
- 3.6.1 if severity >= high
- 4.0 if severity < high
Closing old tickets of medium/low severity. If you believe this should be re-opened, please do so and add justification. (Also, it will probably be solved when the dependent RFEs are implemented.)
Fixed in bug 1331978
(In reply to Nir Soffer from comment #9)
> Fixed in bug 1331978

Reopening based on that comment, so QE can verify this scenario when bug 1331978 is fixed.
oVirt 4.0 Alpha has been released, moving to oVirt 4.0 Beta target.
oVirt 4.0 beta has been released, moving to RC milestone.
There is no engineering item here - moving to ON_QA after talking to Aharon, but note that bug 1331978 needs to be fixed in order to verify this.
Tested with the following code:
----------------------------------------
vdsm-4.18.999-759.git435a852.el7.centos.x86_64
rhevm-4.0.6-0.1.el7ev.noarch

Tested with the following scenario:
----------------------------------------
Steps to Reproduce:
1. Add the host to the Default datacenter using NFS storage and run a VM on it.
2. Create a new datacenter using iSCSI.
3. Stop the VM, put the host into maintenance and reassign it to the other datacenter.
4. Run another VM (from the new datacenter) on the host.
5. SSH to the host and run lvs.

The VM runs fine and no errors are reported by lvs on the host.

Moving to VERIFIED.
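The verification step above can be automated. A minimal sketch, assuming the pass/fail criterion used here (no "Input/output error" lines in lvs output, as seen in the original report); the helper takes the captured output as an argument so it can be exercised against canned text without a real host.

```shell
#!/bin/sh
# Check captured `lvs` output for the stale-device symptom from this bug.
# Returns 0 (PASS) if no "Input/output error" lines appear, 1 otherwise.
check_no_io_errors() {
    # $1: text of `lvs` output (stdout+stderr combined)
    if printf '%s\n' "$1" | grep -q 'Input/output error'; then
        echo "FAIL: stale LVM devices detected in lvs output"
        return 1
    fi
    echo "PASS: no I/O errors in lvs output"
    return 0
}

# On a real host you would run:
#   check_no_io_errors "$(lvs 2>&1)"
```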