Bug 865386

Summary: 3.1 - vdsm: vm's become non-responsive while upgrading pool from 3.0 to 3.1
Product: Red Hat Enterprise Linux 6
Reporter: Dafna Ron <dron>
Component: vdsm
Assignee: Federico Simoncelli <fsimonce>
Status: CLOSED ERRATA
QA Contact: Dafna Ron <dron>
Severity: urgent
Docs Contact:
Priority: high
Version: 6.3
CC: abaron, bazulay, fsimonce, hateya, iheim, ilvovsky, jbiddle, lpeer, ykaul
Target Milestone: rc
Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version: vdsm-4.9.6-39.0
Doc Type: Bug Fix
Doc Text:
Previously, a locking issue in VDSM caused some virtual machines to become non-responsive when upgrading pools from Red Hat Enterprise Virtualization Manager 3.0 to 3.1. This has been fixed so that virtual machines no longer become unresponsive during the upgrade.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-04 19:12:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
backend and vdsm logs

Description Dafna Ron 2012-10-11 10:18:00 UTC
Created attachment 625477 [details]
backend and vdsm logs

Description of problem:

When upgrading the pool from V2 to V3 (RHEV-M 3.0 to 3.1), some of the VMs become non-responsive.

Version-Release number of selected component (if applicable):

vdsm-4.9.6-37.0.el6_3.x86_64
si20

How reproducible:

100%

Steps to Reproduce:
1. Create and run VMs with an OS installed, on a 3.0 pool with multiple domains (I used 5 data domains, 1 export domain, and 1 ISO domain).
2. Upgrade the pool to 3.1.
  
Actual results:

Some of the VMs become non-responsive, come back up, and then become non-responsive again.

Expected results:

VMs should not become non-responsive during the pool upgrade.

Additional info: logs attached

UUID of one of the VMs: 9d714ca2-75af-427a-8aaf-1f3afab74af3

Thread-484::ERROR::2012-10-11 10:16:49,996::task::853::TaskManager.Task::(_setError) Task=`3454f584-ab39-4956-bf67-910dc0a25c27`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2603, in getVolumeSize
    vars.task.getSharedLock(STORAGE, sdUUID)
  File "/usr/share/vdsm/storage/task.py", line 1306, in getSharedLock
    self.resOwner.acquire(namespace, resName, resourceManager.LockType.shared, timeout)
  File "/usr/share/vdsm/storage/resourceManager.py", line 706, in acquire
    raise se.ResourceTimeout()
ResourceTimeout: Resource timeout: ()
Thread-484::DEBUG::2012-10-11 10:16:49,998::task::872::TaskManager.Task::(_run) Task=`3454f584-ab39-4956-bf67-910dc0a25c27`::Task._run: 3454f584-ab39-4956-bf67-910dc0a25c27 ('dbed4cf3-a177-49a8-b41c-f12b85db4286', '11d18980-5c97-40ca-b7ff-6d1fa0f01cc8', '65f8e99b-7ee1-4020-8c23-431e3bf81242', '4f8feffd-ac58-4e4b-80b0-e50491aefb45') {} failed - stopping task
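The traceback shows a monitoring call to getVolumeSize() timing out while waiting for a shared lock on the storage domain, which the pool upgrade holds exclusively. Below is a minimal sketch of that contention pattern; the names upgrade_pool and get_volume_size, and the single threading.Lock standing in for vdsm's shared/exclusive resourceManager lock, are illustrative assumptions, not vdsm source.

import threading
import time

class ResourceTimeout(Exception):
    pass

# Toy stand-in for the per-domain resource in vdsm's resourceManager;
# the upgrade takes it exclusively.
domain_lock = threading.Lock()

def upgrade_pool():
    # The V2 -> V3 upgrade holds the domain while it rewrites metadata,
    # which can take longer than the monitor's lock timeout.
    with domain_lock:
        time.sleep(5)

def get_volume_size(timeout=2):
    # Pre-fix behavior: the read-only size query still queues for the
    # domain lock and gives up after `timeout` seconds; the engine then
    # reports the VM as non-responsive.
    if not domain_lock.acquire(timeout=timeout):
        raise ResourceTimeout("Resource timeout: ()")
    try:
        return 1073741824  # dummy size in bytes
    finally:
        domain_lock.release()

threading.Thread(target=upgrade_pool).start()
time.sleep(0.1)  # let the upgrade grab the lock first
try:
    get_volume_size()
except ResourceTimeout as exc:
    print("monitor timed out:", exc)  # matches the error in the log above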

Comment 3 Federico Simoncelli 2012-10-12 16:13:38 UTC
commit 528e9b01e9cfd26ed83d0fbcce7e05cd666da313
Author: Federico Simoncelli <fsimonce>
Date:   Thu Oct 11 18:39:00 2012 -0400

    volume: remove domain shared lock from getVolumeSize

http://gerrit.ovirt.org/#/c/8517
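Per the commit message, the patch removes the shared STORAGE-domain lock from the read-only getVolumeSize() path, so the size query no longer queues behind the upgrade's exclusive lock. Continuing the toy sketch above (an assumed illustration of the change's effect, not the actual patch):

def get_volume_size_fixed():
    # Post-fix behavior: no domain lock is taken for the read-only size
    # query, so it returns immediately even while upgrade_pool() holds
    # the domain, and ResourceTimeout can no longer be raised here.
    return 1073741824  # dummy size in bytes

With this change the monitoring threads keep answering during the pool upgrade, so the engine never marks the VMs non-responsive.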

Comment 6 Dafna Ron 2012-10-25 13:22:05 UTC
verified on si22.1

Comment 9 errata-xmlrpc 2012-12-04 19:12:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html