Description of problem:
Validation of the pid is required in '/usr/share/vdsm/supervdsmServer':

    @logDecorator
    def getVmPid(self, vmName):
        pidFile = "/var/run/libvirt/qemu/%s.pid" % vmName
        return open(pidFile).read()

It would be better to use something like this:

    pidFile = "/var/run/libvirt/qemu/%s.pid" % vmName
    o = None
    try:
        o = open(pidFile)
        pid = o.read().strip('\n')
        if len(pid) >= 3:  # minimum 3 digits
            # manage a private list in order to avoid re-reading the
            # files too often
            all_vms_pids.append((pid, vmName))
        else:
            # TODO: do something in case of fewer than 3 digits
            pass
    except Exception as ex:
        # TODO: do something if no file exists; shut down the VM
        pass
    finally:
        if o:
            o.close()  # very important
    return pid

Version-Release number of selected component (if applicable):
vdsm-4.14.11-1.el6ev.x86_64

Actual results:
The function returns nothing, and there is no validation of the pid.

Expected results:
- validation of the pid
- closing the file object to release resources
- maintaining a list of pids instead of reading the file every time the function is called

Additional info:
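As an illustration of the direction the report suggests, here is a minimal sketch of reading and validating a pid file; `read_vm_pid` is a hypothetical helper, not the actual vdsm function, and the caller decides what to do (e.g. shut down the VM) when `None` is returned:

```python
def read_vm_pid(pid_file):
    """Return the pid stored in pid_file as an int, or None if the
    file is missing or its contents are not a valid positive pid."""
    try:
        # "with" guarantees the file descriptor is closed even on error
        with open(pid_file) as f:
            content = f.read().strip()
    except IOError:
        # no pid file: let the caller decide how to handle the VM
        return None
    if not content.isdigit() or int(content) <= 0:
        # reject empty, non-numeric, or non-positive values
        return None
    return int(content)
```

Using a context manager makes the explicit `finally: o.close()` bookkeeping from the proposal above unnecessary.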
Set CACHED_PIDS = {} in '_SuperVdsm':

    @logDecorator
    def getVmPid(self, vmName):
        pidFile = "/var/run/libvirt/qemu/%s.pid" % vmName
        file_reader = None
        pid = None
        try:
            # check if the vm is cached
            if vmName in self.CACHED_PIDS:
                pid = self.CACHED_PIDS[vmName]
            else:
                file_reader = open(pidFile)
                pid = file_reader.read().strip('\n')
                if pid.isdigit():
                    self.CACHED_PIDS[vmName] = pid
        except Exception as ex:
            self.log.error("getVmPid failed, shutdown of vm '%s' required" % vmName)
        finally:
            if file_reader:
                file_reader.close()
            else:
                self.log.error("no pid file exists for '%s'" % vmName)
        if pid:
            return pid
I don't see the point of the above; caching is useless, as this is called just once. However, we want to re-assess how we handle 0 PIDs (the proposal is to destroy() in domDependentInit if we can't read the PID), and we should fix the file read() to close the FD.

Moving to Francesco to address that.
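A minimal sketch of the proposed handling, assuming hypothetical stand-ins: `init_vm` for domDependentInit, and injected `get_pid`/`destroy` callables in place of the actual vdsm getVmPid and domain destroy() calls:

```python
class VmRecoveryError(Exception):
    """Raised when a VM cannot be initialized because its pid is unknown."""
    pass

def init_vm(vm_name, get_pid, destroy):
    """Initialize a VM; if its pid cannot be read (or is invalid),
    destroy the domain rather than keep a VM whose QEMU process
    we cannot track.

    get_pid(vm_name) returns an int pid or None; destroy(vm_name)
    tears the domain down. Both are hypothetical callables here."""
    pid = get_pid(vm_name)
    if pid is None or pid <= 0:
        destroy(vm_name)
        raise VmRecoveryError("cannot read pid for %s" % vm_name)
    return pid
```

This keeps the "0 PID" policy in one place: callers either get a valid pid back or the domain is gone and an error is raised.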
*** Bug 1128445 has been marked as a duplicate of this bug. ***
Hi, can you please provide me the best way to verify this bug?
(In reply to Artyom from comment #4)
> Hi can you please provide me the best way to verify this bug?

Hi, it is not easy unless you can pause qemu in the middle of its startup. Probably the best way to test this is on VDSM recovery:

1. Make sure the QEMU pid doesn't change across VDSM recoveries. Of course this assumes the QEMU process is never restarted or harmed in any way.
2. This is tricky anyway:
   2.a. stop VDSM
   2.b. manually tamper with the VDSM recovery file, removing the PID
   2.c. manually tamper with the pid file (/var/run/libvirt/qemu/*.pid) to include invalid values, like zero or anything < 0
   2.d. restart VDSM: the VM should fail to recover

#1 should be quick and easy. #2 is pretty hackish, and we're dealing with extreme conditions here; I'm not sure the test is worth the effort.
Thanks for the explanation.

Verifying on: vdsm-4.17.0-632.git19a83a2.el7.x86_64
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0362.html