Bug 847833

Summary: [RHEVM Installation] The setup does not create the vdsm:kvm user anymore, this caused ISO-uploader to fail on permission.
Product: Red Hat Enterprise Virtualization Manager
Reporter: Simon Grinberg <sgrinber>
Component: ovirt-engine-setup
Assignee: Juan Hernández <juan.hernandez>
Status: CLOSED CURRENTRELEASE
QA Contact: Ilanit Stein <istein>
Severity: urgent
Priority: urgent
Docs Contact:
Version: 3.1.0
CC: acathrow, alourie, asegundo, bazulay, dyasny, iheim, knesenko, kroberts, Rhev-m-bugs, sgordon, ykaul
Target Milestone: ---
Keywords: Regression
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: integration
Fixed In Version: si15
Doc Type: Release Note
Doc Text:
During Manager installation it is possible to configure a local ISO storage domain, exported using NFS. Previously, if this option was selected the NFS export was configured correctly but the user and group required for the Manager to access it were not. The missing user and group are now created during Manager installation.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-04 20:00:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 853715, 854703, 854721
Bug Blocks:

Description Simon Grinberg 2012-08-13 17:23:05 UTC
Description of problem:
There is a known NFSv4 issue that causes the mount point to show nobody:nobody as the owner.

When choosing to create a local NFS share as an ISO domain via the installer, the installer uses default options, which means NFSv4. The ISO domain is then effectively read-only as far as the user is concerned.

This caused the ISO uploader to fail. 

Version-Release number of selected component (if applicable):
SI13.2 (though the share was created with SI12; I did not see any change/BZ regarding that)

How reproducible:
Always 

Steps to Reproduce:
1. Allow the installer to create the local NFS ISO domain
2. Try to upload an image
  
Actual results:
Failure with the message 
ERROR: A user named vdsm with a UID and GID of 36 must be defined on the system to mount the ISO storage domain on ISO_DOMAIN as Read/Write

[root@rhevm31 ~]# rhevm-iso-uploader -v upload -i ISO_DOMAIN ttt.iso
Please provide the REST API password for the admin@internal RHEV-M user (CTRL+D to abort): 
DEBUG: API Vendor(Red Hat)	API Version(3.1.0)
DEBUG: id=061dfcf6-f255-4e54-8bff-5b06a52af4c0 address=rhevm31.demo.redhat.com path=/usr/local/exports/iso
DEBUG: local NFS mount point is /tmp/tmpSR_b_I
DEBUG: NFS mount command (/bin/mount -t nfs -o rw,sync,soft rhevm31.demo.redhat.com:/usr/local/exports/iso /tmp/tmpSR_b_I)
DEBUG: /bin/mount -t nfs -o rw,sync,soft rhevm31.demo.redhat.com:/usr/local/exports/iso /tmp/tmpSR_b_I
DEBUG: _cmds(['/bin/mount', '-t', 'nfs', '-o', 'rw,sync,soft', 'rhevm31.demo.redhat.com:/usr/local/exports/iso', '/tmp/tmpSR_b_I'])
DEBUG: returncode(0)
DEBUG: STDOUT()
DEBUG: STDERR()
ERROR: A user named vdsm with a UID and GID of 36 must be defined on the system to mount the ISO storage domain on ISO_DOMAIN as Read/Write
DEBUG: /bin/umount -t nfs -f  /tmp/tmpSR_b_I
DEBUG: /bin/umount -t nfs -f  /tmp/tmpSR_b_I
DEBUG: _cmds(['/bin/umount', '-t', 'nfs', '-f', '/tmp/tmpSR_b_I'])
DEBUG: returncode(0)
DEBUG: STDOUT()
DEBUG: STDERR()

Mounting with the same mount command shows the problem:

[root@rhevm31 ~]# /bin/mount -t nfs -o rw,sync,soft rhevm31.demo.redhat.com:/usr/local/exports/iso /mnt
[root@rhevm31 ~]# ll /mnt
drwxr-xr-x. 4 nobody nobody 4096 Aug  5 11:14 061dfcf6-f255-4e54-8bff-5b06a52af4c0

Comment 1 Simon Grinberg 2012-08-13 17:25:09 UTC
Existing workarounds:

- Manually fix the export to NFS v3.
- Manually copy the file to the ISO domain and set ownership to 36:36.
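For the first workaround, one server-side approach (a sketch; assumes RHEL 6's /etc/sysconfig/nfs mechanism) is to stop rpc.nfsd from offering NFSv4 at all, so clients negotiate v3:

```shell
# Sketch: server-side workaround -- disable NFSv4 so clients fall back
# to v3 (this variable lives in /etc/sysconfig/nfs on RHEL 6; restart
# the nfs service afterwards). "-N 4" tells rpc.nfsd not to offer v4.
RPCNFSDARGS="-N 4"
```

Alternatively, the client can force version 3 per mount with `mount -o vers=3`.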

Comment 4 Simon Grinberg 2012-08-13 17:43:48 UTC
(In reply to comment #3)
> - Please attach installation log (and the relevant /etc/exports file)
There is no problem in installation - goes smoothly

> - do we want the export to be V4 only? Don't think so.
The problem is the export line which shows: 
[root@rhevm31 ~]# cat /etc/exports 
/usr/local/exports/iso	0.0.0.0/0.0.0.0(rw)	#rhev installer

That defaults to NFSv4, so when using the mount command as the iso_uploader does, you get:
[root@rhevm31 ~]# mount
.
.
.
rhevm31.demo.redhat.com:/usr/local/exports/iso on /mnt type nfs (rw,sync,soft,vers=4,addr=23.2.2.60,clientaddr=23.2.2.60)


And on the hosts, after activating the ISO domain 

rhevm31.demo.redhat.com:/usr/local/exports/iso on /rhev/data-center/mnt/rhevm31.demo.redhat.com:_usr_local_exports_iso type nfs (rw,soft,nosharecache,timeo=600,retrans=6,vers=4,addr=23.2.2.60,clientaddr=23.2.2.25)
ll /rhev/data-center/mnt/rhevm31.demo.redhat.com:_usr_local_exports_iso/061dfcf6-f255-4e54-8bff-5b06a52af4c0
total 8
drwxr-xr-x. 2 nobody nobody 4096 Jul 31 20:00 dom_md
drwxr-xr-x. 3 nobody nobody 4096 Jul 30 19:03 images

Comment 5 Yaniv Kaul 2012-08-13 18:28:10 UTC
(In reply to comment #4)
> (In reply to comment #3)
> > - Please attach installation log (and the relevant /etc/exports file)
> There is no problem in installation - goes smoothly
> 
> > - do we want the export to be V4 only? Don't think so.
> The problem is the export line which shows: 
> [root@rhevm31 ~]# cat /etc/exports 
> /usr/local/exports/iso	0.0.0.0/0.0.0.0(rw)	#rhev installer
> 
> That default to NFS4 thus when using the mount command as done by the
> iso_uploader you get:

I thought it defaults to auto-negotiate.

> [root@rhevm31 ~]# mount
> .
> .
> .
> rhevm31.demo.redhat.com:/usr/local/exports/iso on /mnt type nfs
> (rw,sync,soft,vers=4,addr=23.2.2.60,clientaddr=23.2.2.60)
> 
> 
> And on the hosts, after activating the ISO domain 
> 
> rhevm31.demo.redhat.com:/usr/local/exports/iso on
> /rhev/data-center/mnt/rhevm31.demo.redhat.com:_usr_local_exports_iso type
> nfs
> (rw,soft,nosharecache,timeo=600,retrans=6,vers=4,addr=23.2.2.60,
> clientaddr=23.2.2.25)
> ll
> /rhev/data-center/mnt/rhevm31.demo.redhat.com:_usr_local_exports_iso/
> 061dfcf6-f255-4e54-8bff-5b06a52af4c0
> total 8
> drwxr-xr-x. 2 nobody nobody 4096 Jul 31 20:00 dom_md
> drwxr-xr-x. 3 nobody nobody 4096 Jul 30 19:03 images

Comment 6 Itamar Heim 2012-08-13 19:26:25 UTC
I don't understand why it would be an issue to have NFSv4 - the ISO uploader should support it as well

Comment 7 Keith Robertson 2012-08-13 21:00:11 UTC
(In reply to comment #6)
> i don't understand why it would be an issue to have nfsv4 - isouploader
> should support it as well

Pretty sure this has nothing to do with NFSv4.  The error log is clearly stating that the system upon which the ISO uploader is running doesn't have a user with a UID and GID of 36 [1].  

You need a UID and GID of 36:36 on the local system so that you can actually write to files on the export.  NFS checks the UID/GID of the mounting application against the files being exported and applies standard *nix permissions.  If I mount an NFS export as a user with a UID/GID combination of 500:500 and the export only has 'write' perms for users of 36:36 I will get 'Permission denied' if I try to write a file on the export.

You can solve this in 2 ways:
 1 - Pin the export to a particular UID/GID combination (very useful for NAS devices).  This is accomplished on the NFS server by additional options on the export line (e.g. /exports/iso       *(rw,anonuid=500,anongid=500) ).  When you pin with anonuid/anongid, it doesn't matter what the client mounts as.  The NFS server will completely ignore it and r/w as the pinned uid/gid.

 2 - Write files on the filesystem as the ID being exported.

As far as I can tell, this is not a bug.  You need to either create a local UID/GID combination of 36:36 or pin the exported filesystem.


[1] ERROR: A user named vdsm with a UID and GID of 36 must be defined on the system to mount the ISO storage domain on ISO_DOMAIN as Read/Write

Comment 8 Keith Robertson 2012-08-13 21:13:54 UTC
Follow up...

You might be wondering how this actually happens in the ISO uploader...
Step 1: mount export as root
Step 2: Open filehandle to source file as root.
Step 3: seteuid/egid to 36:36.  This will fail if there is no local user/group of 36:36.  <--- You are here.
Step 4: You are now 36:36.  Open filehandle on mountpoint.
Step 5: pipe source to dest.
Step 6: Set euid/egid back to 0
Step 7: Close all file handles.
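The steps above can be sketched in Python (an illustrative sketch only, not the actual uploader code; `required_ids_exist` and `copy_as_vdsm` are hypothetical names):

```python
import os
import pwd
import grp
import shutil

VDSM_UID = 36  # the required user/group IDs from the error message
KVM_GID = 36

def required_ids_exist(uid=VDSM_UID, gid=KVM_GID):
    """Return True if a local user with `uid` and a group with `gid` exist."""
    try:
        pwd.getpwuid(uid)
        grp.getgrgid(gid)
        return True
    except KeyError:
        return False

def copy_as_vdsm(src, dest):
    """Copy src onto the mounted ISO domain with effective IDs 36:36."""
    if not required_ids_exist():
        # This is where step 3 fails with the "A user named vdsm with a
        # UID and GID of 36 must be defined" error.
        raise RuntimeError("vdsm user / kvm group (36:36) not defined locally")
    os.setegid(KVM_GID)   # drop the group first, then the user
    os.seteuid(VDSM_UID)
    try:
        shutil.copyfile(src, dest)  # writes arrive on the NFS server as 36:36
    finally:
        os.seteuid(0)     # restore root so the later umount still works
        os.setegid(0)
```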

Comment 9 Ayal Baron 2012-08-13 21:22:39 UTC
I don't understand why this is a bug.
The nobody:nobody issue is a misconfiguration.  It happens when your NFS client and your NFS server reside in different domains.  You can fix this by editing /etc/idmapd.conf, setting the domain to be equal to the one on the NFS server, and then running idmapd -c, IIRC.
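For reference, the setting in question is the `Domain` key in `/etc/idmapd.conf`, which must be identical on client and server (the value below is a placeholder taken from the hostnames in this report):

```ini
# /etc/idmapd.conf (on both the NFSv4 client and the server)
[General]
# Must match on both ends, or NFSv4 id-mapping falls back to
# nobody:nobody for unknown owners.
Domain = demo.redhat.com
```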

Comment 10 Simon Grinberg 2012-08-14 06:57:45 UTC
(In reply to comment #9)
> I don't understand why this is a bug.
> The nobody:nobody issue is a misconfiguration.  It happens when your nfs
> client and your nfs server reside on different domains.  

Which domain? A directory services domain? Are we forcing this now? 

> You can fix this by
> editing /etc/idmapd.conf 

On the server side? Hosts? both? 

> and setting domain to be equal to the one on the
> nfs server.
> and then running idmapd -c iirc.

The reason this is a bug is that this NFS share is configured by rhevm-setup.
Meaning, rhevm-setup creates the directory, sets the permissions, configures the NFS export, and adds this share as an ISO domain in RHEV Manager.

And this worked well in 3.0 and now it does not, meaning from the user's perspective this is a regression.

So either add the proper configuration in /etc/idmapd.conf, or configure the share for NFSv3, or, if it is auto-negotiation as Kaul mentioned in comment #5, provide the proper parameters to the ISO domain upon creation. Whatever goes, as long as it works as before.

Comment 11 Simon Grinberg 2012-08-14 07:01:27 UTC
(In reply to comment #10)
Correction 
> The reasons this is a back is that this NFSshare is configured by
s/back/bug;

> rhevm-setup.
> Meaning, 
> rhevm-setup creates the directory, sets the permission, configures the NFS
> export, and adds this share as an ISO domain in RHEV Manager. 
>  
> And this worked well in 3.0 


This is why I opened the bug on the installer and not on the uploader - it's not an uploader issue. All the hosts see this domain as read-only, since the owner shows as nobody:nobody rather than the proper vdsm user.

Comment 13 Keith Robertson 2012-08-14 10:39:07 UTC
(In reply to comment #10)
> (In reply to comment #9)
> > I don't understand why this is a bug.
> > The nobody:nobody issue is a misconfiguration.  It happens when your nfs
> > client and your nfs server reside on different domains.  
> 
snip
> Whatever goes as long as it works as before.

So, previously the installer would create a UID/GID combination of 36:36 on the RHEV-M.  Is that not happening now?

Another thing that I just noticed, after looking at the logs again, was that we're exporting /usr/local/exports/iso as r/w. IMHO exporting anything in /usr as r/w is a bad idea.  If the installer is going to set up an export for /usr it should be 'read' only and not r/w.  R/W should be in /var or something.

WRT the auto-negotiate comment, I'm pretty sure auto-negotiation is only related to the NFS version, and that it wouldn't solve issues with write permissions.

Comment 14 Simon Grinberg 2012-08-14 15:36:16 UTC
(In reply to comment #13)

> So, previously the installer would create a UID/GID combination of 36:36 on
> the RHEV-M.  Is that not happening now?

I've talked with Ofer a few weeks back and he said the script never created that user. 

However on rhev 3.0 rhevm I get: 

cat /etc/passwd | egrep 'vdsm|36'
vdsm:x:36:36:RHEV node manager:/:/sbin/nologin

while on rhev 3.1 I get nothing. 

So I've tried to add the user: 
groupadd -g 36 kvm
useradd -g 36 -u 36 vdsm -s /sbin/nologin -M

I then restarted the NFS service, and it looks good:
service nfs restart
/bin/mount -t nfs -o rw,sync,soft rhevm31.demo.redhat.com:/usr/local/exports/iso /mnt
 ll /mnt
drwxr-xr-x. 4 vdsm kvm 4096 Aug  5 11:14 061dfcf6-f255-4e54-8bff-5b06a52af4c0

And the ISO uploader works like a charm:

DEBUG: Size of ttt.iso:	13 bytes	0.0 1K-blocks	0.0 MB
DEBUG: Available space in /tmp/tmpYiyYek/061dfcf6-f255-4e54-8bff-5b06a52af4c0/images/11111111-1111-1111-1111-111111111111:	8628731904 bytes	8426496.0 1K-blocks	8229.0 MB
DEBUG: euid(0) egid(0)


So it's not the idmapd domain issue that shows the nobody user, but the fact that the rhevm-setup script no longer creates the vdsm user. It just looks the same.

Thanks, Keith.
And Steve, for suggesting that if the user does not exist on the mounting machine it will not work. 


Keith, this however raises a different issue - it assumes that the ISO domain is always accessible to the RHEV Manager, which is something we did not enforce in 2.2, since we first copied the image to the host and then moved it to the ISO domain.

Comment 16 Stephen Gordon 2012-08-14 16:07:46 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
During Manager installation it is possible to configure a local ISO storage domain, exported using NFS. Currently, if this option is selected the NFS export is configured correctly but the user and group required for the Manager to access it are not. 

To work around this issue:

* Add the <systemitem>kvm</systemitem> group, with a group identifier (GID) of 36.
* Add the <systemitem>vdsm</systemitem> user, with a user identifier (UID) of 36.

Comment 17 Keith Robertson 2012-08-14 16:22:43 UTC

(In reply to comment #14)
> (In reply to comment #13)
> 
> > So, previously the installer would create a UID/GID combination of 36:36 on
> > the RHEV-M.  Is that not happening now?
> 
snip>
> 
> Thanks, Kieth.
YW
> And Steve for suggesting that if the user does not exist on the mounting
> machine it will not work. 
> 
> 
> Kieth this however raised a different issue - it assumes that the ISO domain
> is always accessible to the RHEV-manager, that is something we did not
> enforce in 2.2 since we first copied the image to the host and then moved it
> to the ISO domain.

Well I've been thinking about this too.  In truth, there is no requirement that the ISO uploader or Image uploader actually reside on the same system as the RHEV-M.  

To make both tools more portable, we could...
1- Create uid/gid 36:36 in the %pre section of the ISO uploader and Image uploader's RPM spec.
2- Package them in the same channel as ovirt-engine-sdk (where ever that is)
3- User can manually edit config files.
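Option 1 could be sketched in the spec file roughly as follows (a hypothetical scriptlet, not an actual patch; the idempotent getent guards are the usual packaging idiom):

```spec
# Hypothetical %pre scriptlet for the ISO/Image uploader packages:
# create kvm group and vdsm user with the fixed 36:36 IDs if missing.
%pre
getent group kvm >/dev/null || groupadd -r -g 36 kvm
getent passwd vdsm >/dev/null || \
    useradd -r -u 36 -g kvm -s /sbin/nologin -M -d / vdsm
exit 0
```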

Comment 18 Stephen Gordon 2012-08-14 16:29:25 UTC
    Technical note updated. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    Diffed Contents:
@@ -2,5 +2,5 @@
 
 To work around this issue:
 
-* Add the <systemitem>kvm</systemitem> group, with a group identifier (GID) of 36.
+* Manually add the <systemitem>kvm</systemitem> group, with a group identifier (GID) of 36.
-* Add the <systemitem>vdsm</systemitem> user, with a user identifier (UID) of 36.+* Manually add the <systemitem>vdsm</systemitem> user, with a user identifier (UID) of 36.

Comment 19 Alon Bar-Lev 2012-08-15 11:21:39 UTC
(In reply to comment #17)
> To make both tools more portable, we could...
> 1- Create uid/gid 36:36 in the %pre section of the ISO uploader and Image
> uploader's RPM spec.
> 2- Package them in the same channel as ovirt-engine-sdk (where ever that is)
> 3- User can manually edit config files.

Creating the vdsm user in ovirt-engine-sdk would provide a solution for ovirt-engine as well, as the SDK is a dependency of it.

Is this an acceptable solution?

Comment 20 Keith Robertson 2012-08-15 13:52:40 UTC
(In reply to comment #19)
> (In reply to comment #17)
> > To make both tools more portable, we could...
> > 1- Create uid/gid 36:36 in the %pre section of the ISO uploader and Image
> > uploader's RPM spec.
> > 2- Package them in the same channel as ovirt-engine-sdk (where ever that is)
> > 3- User can manually edit config files.
> 
> Creating the vdsm user at ovirt-engine-sdk will provide a solution for the
> ovirt-engine as well as this is a dependency.
> 
> Is this acceptable solution?

That's a pretty good idea.  All of the tools have ovirt-engine-sdk/rhevm-sdk as a dependency. So, putting it there would keep us from having to duplicate it in the spec files of the ISO uploader and image uploader.

Also, it seems reasonable that the SDK create the user since oVirt/RHEVM expect it to exist.

Cheers

Comment 21 Itamar Heim 2012-08-15 14:06:28 UTC
But how is the vdsm user related to ovirt-engine-sdk?
If I just install it and the CLI on my client machine, why should I get a vdsm user?

Comment 22 Keith Robertson 2012-08-15 14:15:32 UTC
(In reply to comment #21)
> but how is vdsm user related to the ovirt-engine-sdk?
> if i just install it and the cli on my client machine, why should i get a
> vdsm user?

It is tangentially related because of other components within oVirt/RHEV.  

I understand the position that the SDK itself doesn't need 36:36 and I considered it.  It is a trade-off.  Do we put the UID/GID creation code in every RPM that needs to do NFS related activities, or do we centrally locate this logic in a common module?

Comment 23 Alon Bar-Lev 2012-08-15 14:27:43 UTC
(In reply to comment #21)
> but how is vdsm user related to the ovirt-engine-sdk?
> if i just install it and the cli on my client machine, why should i get a
> vdsm user?

The question is how popular (common) the NFS functionality is for users of the SDK. If it is common, the most trivial approach is to add the user when the SDK is installed.

Comment 24 Keith Robertson 2012-08-15 14:36:53 UTC
(In reply to comment #23)
> (In reply to comment #21)
> > but how is vdsm user related to the ovirt-engine-sdk?
> > if i just install it and the cli on my client machine, why should i get a
> > vdsm user?
> 
> The question is how much popular (common) the is nfs functionality for users
> of the sdk. If it is common, the most trivial is to add the user when sdk is
> installed.

Hate to suggest this ... but maybe a dummy rpm that all other RPMs wanting that ID should require (e.g. vdsm, rhevm, LC, iso uploader, I have no idea what else).  One benefit of such an RPM would be that it would give a central location for the UID and GID and possibly whatever else is common across the oVirt ecosystem.

Comment 25 Alon Bar-Lev 2012-08-15 14:40:57 UTC
(In reply to comment #24)
> (In reply to comment #23)
> > (In reply to comment #21)
> > > but how is vdsm user related to the ovirt-engine-sdk?
> > > if i just install it and the cli on my client machine, why should i get a
> > > vdsm user?
> > 
> > The question is how much popular (common) the is nfs functionality for users
> > of the sdk. If it is common, the most trivial is to add the user when sdk is
> > installed.
> 
> Hate to suggest this ... but maybe a dummy rpm that all other RPMs wanting
> that ID should require (e.g. vdsm, rhevm, LC, iso uploader, I have no idea
> what else).  One benefit of such an RPM would be that it would give a
> central location for the UID and GID and possibly whatever else is common
> across the oVirt ecosystem.

I think that RPM is more or less the SDK... (except for vdsm itself).

I need a decision.

Comment 26 Keith Robertson 2012-08-15 14:54:44 UTC
Personally, I'd like to see this done in a common spot, and right now that common spot appears to be the SDK.

(In reply to comment #25)
> (In reply to comment #24)
> > (In reply to comment #23)
> > > (In reply to comment #21)
> > > > but how is vdsm user related to the ovirt-engine-sdk?
> > > > if i just install it and the cli on my client machine, why should i get a
> > > > vdsm user?
> > > 
> > > The question is how much popular (common) the is nfs functionality for users
> > > of the sdk. If it is common, the most trivial is to add the user when sdk is
> > > installed.
> > 
> > Hate to suggest this ... but maybe a dummy rpm that all other RPMs wanting
> > that ID should require (e.g. vdsm, rhevm, LC, iso uploader, I have no idea
> > what else).  One benefit of such an RPM would be that it would give a
> > central location for the UID and GID and possibly whatever else is common
> > across the oVirt ecosystem.
> 
> I think that rpm is more or less the sdk... (except for the vdsm it-self).
> 
> I need a decision.

I don't have the authority to make a decision here, but I vote for a common spot.

Comment 27 Alon Bar-Lev 2012-08-15 14:59:49 UTC
I need a decision from someone with authority :)

Comment 28 Juan Hernández 2012-08-16 10:00:29 UTC
The proposed solution to fix the issue is to add the creation of the vdsm user and kvm group to the ovirt-engine-setup package. I mistakenly removed it with the changes for Fedora 17. The change is available here:

http://gerrit.ovirt.org/7247

I need the acks to merge it.

Comment 29 Simon Grinberg 2012-08-16 18:01:47 UTC
(In reply to comment #28)
> The proposed solution to fix the issue is to add the creation of the vdsm
> user and kvm group to the ovirt-engine-setup package. I mistakenly removed
> it with the changes for Fedora 17. The change is available here:
> 
> http://gerrit.ovirt.org/7247
> 
> I need the acks to merge it.

Guys, I think this is the right solution, to fix it back in the installer.

Keith, Alon,
If someone wishes to use a different machine for the uploader, then they are welcomed to:

Either use one of the hosts in the data center, which is a very reasonable course of action. If the RHEV Manager does not have access to the storage then the host will certainly have to have it. In this case you should identify that and use the current mount done by the host.

or,

Let them create the user on that machine themselves, they know the requirements.

Comment 30 Keith Robertson 2012-08-16 19:03:55 UTC
(In reply to comment #29)
> (In reply to comment #28)
> > The proposed solution to fix the issue is to add the creation of the vdsm
> > user and kvm group to the ovirt-engine-setup package. I mistakenly removed
> > it with the changes for Fedora 17. The change is available here:
> > 
> > http://gerrit.ovirt.org/7247
> > 
> > I need the acks to merge it.
> 
> Guys, I think this is the right solution, to fix it back in the installer.
Agree.
> 
> Kieth, Alon,
> If someone wishes to use a different machine for the uploader, then they are
> welcomed to:
> 
> Either use one of the hosts in the data center, which is a very reasonable
> course of action. If the RHEV Manager does not have access to the storage
> then the host will certainly has to have it. In this case you should
> identify that and use the current mount done by the host.
> 
> or,
> 
> Let them create the user on that machine themselves, they know the
> requirements.

Sort of agree with that. IMO, we should endeavour to make it as easy as possible to use the tools; it can only help our customers. So, that said, I'll try to think of something to make it easy to use the tools from a system without RHEV-M.

Comment 31 Juan Hernández 2012-08-17 11:37:19 UTC
The proposed change has been merged upstream.

Comment 33 Ilanit Stein 2012-09-02 11:19:06 UTC
Checked in SI16:

Cannot test it, since there's a new bug that prevents the command from running:
https://bugzilla.redhat.com/show_bug.cgi?id=853715

Comment 34 Ilanit Stein 2012-09-03 12:23:17 UTC
Verified on SI16, using "certificate" workaround mentioned in bug https://bugzilla.redhat.com/show_bug.cgi?id=853715 (comment 4):

[root@lilach-rhel /]# rhevm-iso-uploader -v upload -i ISO_DOMAIN test.iso
Please provide the REST API password for the admin@internal RHEV-M user (CTRL+D to abort): 
DEBUG: API Vendor(Red Hat)	API Version(3.1.0)
DEBUG: id=c4592f74-e539-485d-a766-6dc3b65b612f address=lilach-rhel.qa.lab.tlv.redhat.com path=/usr/local/exports/iso_2012_09_03_14_14_32
DEBUG: local NFS mount point is /tmp/tmpjo4FVD
DEBUG: NFS mount command (/bin/mount -t nfs -o rw,sync,soft lilach-rhel.qa.lab.tlv.redhat.com:/usr/local/exports/iso_2012_09_03_14_14_32 /tmp/tmpjo4FVD)
DEBUG: /bin/mount -t nfs -o rw,sync,soft lilach-rhel.qa.lab.tlv.redhat.com:/usr/local/exports/iso_2012_09_03_14_14_32 /tmp/tmpjo4FVD
DEBUG: _cmds(['/bin/mount', '-t', 'nfs', '-o', 'rw,sync,soft', 'lilach-rhel.qa.lab.tlv.redhat.com:/usr/local/exports/iso_2012_09_03_14_14_32', '/tmp/tmpjo4FVD'])
DEBUG: returncode(0)
DEBUG: STDOUT()
DEBUG: STDERR()
DEBUG: Size of test.iso:	0 bytes	0.0 1K-blocks	0.0 MB
DEBUG: Available space in /tmp/tmpjo4FVD/c4592f74-e539-485d-a766-6dc3b65b612f/images/11111111-1111-1111-1111-111111111111:	5874647040 bytes	5736960.0 1K-blocks	5602.0 MB
DEBUG: euid(0) egid(0)
DEBUG: euid(0) egid(0)
WARNING: failed to refresh the list of files available in the ISO_DOMAIN ISO storage domain. Please refresh the list manually using the 'Refresh' button in the RHEV-M Webadmin console.
DEBUG: 
status: 400
reason: Bad Request
detail: Error connecting to the Storage Pool Manager service.
Possible reasons:
 - Storage Pool Manager service is in non-active state.
 - No Active Host in the Data Center.
DEBUG: /bin/umount -t nfs -f  /tmp/tmpjo4FVD
DEBUG: /bin/umount -t nfs -f  /tmp/tmpjo4FVD
DEBUG: _cmds(['/bin/umount', '-t', 'nfs', '-f', '/tmp/tmpjo4FVD'])
DEBUG: returncode(0)
DEBUG: STDOUT()
DEBUG: STDERR()
[root@lilach-rhel /]# /bin/mount -t nfs -o rw,sync,soft lilach-rhel.qa.lab.tlv.redhat.com:/usr/local/exports/iso /mnt
[root@lilach-rhel /]# ll /mnt
total 4
drwxr-xr-x. 4 vdsm kvm 4096 Sep  2 12:00 20d8c45c-f2f5-449a-949f-3a554b0dc228

Comment 35 Itamar Heim 2012-09-12 15:43:45 UTC
*** Bug 856689 has been marked as a duplicate of this bug. ***