Bug 1175255

Summary: ERROR 'no free file handlers in pool' in vdsm.log while creating VM from template
Product: [Retired] oVirt
Reporter: Tiemen Ruiten <tiemen>
Component: ovirt-engine-core
Assignee: Tal Nisan <tnisan>
Status: CLOSED DUPLICATE
QA Contact: Pavel Stehlik <pstehlik>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 3.4
CC: amureini, bazulay, ecohen, iheim, lsurette, mgoldboi, rbalakri, s.kieske, tiemen, tnisan, yeylon
Target Milestone: ---
Target Release: 3.5.1
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-01-07 07:18:48 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1193195
Attachments:
vdsm log file (flags: none)
engine log (flags: none)

Description Tiemen Ruiten 2014-12-17 11:18:26 UTC
Created attachment 970055 [details]
vdsm log file

Description of problem:

ERROR 'no free file handlers in pool' in vdsm.log

Version-Release number of selected component (if applicable):
Name        : vdsm
Arch        : x86_64
Version     : 4.14.17
Release     : 0.el6


How reproducible:
Create a new cloned VM from a template.

Actual results:
Load skyrockets on the hypervisor node, other VMs stop responding. VM creation job fails.

Expected results:
New VM is created and other VMs are not affected.

Additional info:
Two node oVirt cluster running on Xeon(R) CPU E5-2650 0 @ 2.00GHz and 64 GB of RAM. Load at the moment of creating the new VM was less than 1 on both nodes. There are currently 26 VMs running on the two nodes, evenly distributed. 
The ISO domain is a gluster volume exposed through NFS. The storage domain for the VMs is also a gluster volume. The underlying filesystem for the gluster volumes is ZFS.
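For anyone triaging a similar report, a quick way to confirm the failure is to count occurrences of the error string in vdsm.log. A minimal sketch follows; the sample log line is illustrative (not taken from the attachment), and on a real host the log is typically at /var/log/vdsm/vdsm.log, though the path may vary by version and configuration.

```shell
# Write an illustrative sample line (a real vdsm.log would contain many
# fields; only the error string matters for this check).
cat > /tmp/vdsm-sample.log <<'EOF'
Thread-123::ERROR::2014-12-16 11:17:02,000::sample::no free file handlers in pool
EOF

# Count how many times the error appears; a nonzero count confirms the
# symptom described in this bug.
grep -c "no free file handlers in pool" /tmp/vdsm-sample.log
```

On an affected hypervisor, point grep at the real vdsm.log instead of the sample file.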

Comment 1 Sven Kieske 2014-12-17 13:05:38 UTC
maybe this helps:

I did not observe this behaviour with
vdsm-4.13.3-3.el6.x86_64
on a node with much more RAM/CPU and more VMs.

HTH

Sven

Comment 2 Allon Mureinik 2014-12-21 11:40:01 UTC
Tal, can you take a look please?

Comment 3 Allon Mureinik 2014-12-21 17:49:21 UTC
Tiemen, can you please also include the engine's log?

Comment 4 Tiemen Ruiten 2014-12-21 18:01:02 UTC
Created attachment 971760 [details]
engine log

The button to create the VM was pressed around 2014-12-16 11:17.

Comment 5 Tiemen Ruiten 2015-01-07 07:18:48 UTC

*** This bug has been marked as a duplicate of bug 1145241 ***

Comment 6 Tiemen Ruiten 2015-01-07 07:21:23 UTC
The cluster didn't have a default disk policy. Once a policy was created, the problem did not occur when adding a VM or creating a disk.