Bug 1584663
Summary: [RFE] Add functionality to virsh to mount nfs shares using commands like mountvers, nosuid, nodev.

Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.7
Reporter: Jason <jbryant>
Assignee: John Ferlan <jferlan>
QA Contact: Meina Li <meili>
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Keywords: FutureFeature
Target Milestone: rc
Target Release: 7.7
Hardware: All
OS: Linux
Fixed In Version: libvirt-4.5.0-13.el7
oVirt Team: Virt
Type: Bug
Last Closed: 2019-08-06 13:13:52 UTC
CC: dyuan, fjin, jdenemar, jferlan, laine, lmen, lsurette, mkalinin, mtessun, Rhev-m-bugs, srevivo, xuzhang, yalzhang
Description (Jason, 2018-05-31 12:14:40 UTC):
(In reply to Jason from comment #0)
> Description of problem: Site is requesting that we add functionality in virsh, when mounting storage pools, to be able to designate mount options such as mountvers, nosuid, and nodev.
> Actual results: Presently, when you use virsh to present the pool, the site confirms that the VM sees the storage pool as mounted (even when not using /etc/fstab, mount, or autofs) because they used the automount command in virsh. But virsh does not allow mounting with the above options.
> Expected results: To be able to use virsh to designate the above options.

virsh is part of RHEL, not RHV. My goal/hope is to work on this for RHEL 7.7, but I cannot guarantee anything.

From a quick read, without putting too much thought into it, the desire is to be able to provide a list of "mount options" (e.g. -o or --options from mount(8)) in some manner. For NFS, that would mean a user-provided string of options at a very cursory/raw level. BTW: I have no idea what STIG means... it's an FLA (four-letter acronym) that I don't recognize!

FWIW: 'pool-list' wouldn't be the command that would need to be altered. pool-create and pool-create-as provide the functionality to start the pool (in NFS terms, mount the target locally). pool-define and pool-define-as provide the means to "save" that pool's XML, which then allows autostart in the future. So it's the pool-{create|define}[-as] operations that would need adjustment.

Posted some patches upstream that will add the options; see https://www.redhat.com/archives/libvir-list/2018-December/msg00558.html

The v1 patches were essentially NACK'ed because of a desire not to place/provide "undefined" or "arbitrary" options within the XML. A suggestion was made to use XML Namespaces.
A new v2 set of patches has been posted based on the review feedback: https://www.redhat.com/archives/libvir-list/2019-January/msg00164.html

After a few more iterations, the v5 patches (https://www.redhat.com/archives/libvir-list/2019-January/msg01196.html) were accepted and pushed:

Support for nosuid, noexec, and nodev as the "default" standard for all mount-target storage pools, "fs" and "netfs" (nfs, cifs, and gluster):

commit f00cde7f1133fee96dc13a80d7f402c704346974
Author: John Ferlan <jferlan>
Date:   Fri Jan 11 10:53:35 2019 -0500

    storage: Add default mount options for fs/netfs storage pools

    ...

    Modify the command generation to add some default options to the
    fs/netfs storage pools based on the OS type. For Linux, it'll be
    the "nodev, nosuid, noexec". For FreeBSD, it'll be "nosuid, noexec".
    For others, just leave the options alone.

    Modify the storagepoolxml2argvtest to handle the fact that the same
    input XML could generate different output XML based on whether Linux,
    FreeBSD, or other was being built.

    Signed-off-by: John Ferlan <jferlan>
    Reviewed-by: Daniel P. Berrangé <berrange>

$ git describe f00cde7f1133fee96dc13a80d7f402c704346974
v5.0.0-149-gf00cde7f11
$

Rather than using 'mountvers', new XML was added to the NFS pool <source>, such as:

  <source>
    <host name='localhost'/>
    <dir path='/var/lib/libvirt/images'/>
    <format type='nfs'/>
    <protocol ver='3'/>
  </source>

This equates to the 'nfsvers' option. Support was added via:

commit 3d3647e14f680015495f0f4650df8a2c1e230ec8
Author: John Ferlan <jferlan>
Date:   Fri Jan 11 13:42:18 2019 -0500

    storage: Add the nfsvers to the command line

    If protocolVer present, add the -o nfsvers=# to the command line
    for the NFS Storage Pool

    Signed-off-by: John Ferlan <jferlan>
    Reviewed-by: Daniel P. Berrangé <berrange>

$ git describe 3d3647e14f680015495f0f4650df8a2c1e230ec8
v5.0.0-152-g3d3647e14f
$

It will also be possible to use '--source-protocol-ver' with the virsh "pool-create-as" or "pool-define-as" commands:

commit a3dbaa364721ae7bc7b8ae700091bf05392818f7
Author: John Ferlan <jferlan>
Date:   Fri Jan 11 14:48:31 2019 -0500

    virsh: Add source-protocol-ver for pool commands

    Allow the addition of the <protocol ver='n'/> to the provided XML.

    Signed-off-by: John Ferlan <jferlan>
    Reviewed-by: Daniel P. Berrangé <berrange>

$ git describe a3dbaa364721ae7bc7b8ae700091bf05392818f7
v5.0.0-153-ga3dbaa3647
$

Hi John, I found that the xmlns schemas for fs (https://libvirt.org/schemas/storagepool/source/fs/1.0) and rbd (https://libvirt.org/schemas/storagepool/source/rbd/1.0) can't be accessed now. Will they be opened after this bug is ON_QA, or is there a problem now? Thank you.

Ignore the xmlns stuff for QE purposes. The important thing for this bz is determining that the "nosuid, noexec, and nodev" options are added on pool start and that "nfsvers=#" can also be recognized and added via the virsh command. Digging through some patch history, I see that I noted how to determine whether the flags were set for the running pool: "... use either 'nfsstat -m' or 'grep $POOLNAME /proc/mounts' for the added Flags/options...". That's at least how I verified that the flags were added.

FWIW: The xmlns stuff is an upstream, "development only" type mechanism to add/implement features, for flags/options that aren't managed by libvirt. It is expected someone would then create an RFE bz for whatever option or feature they were trying to add. https://libvirt.org/formatstorage.html describes how one would "use it". So if someone wanted "mumble" as a new NFS flag, they could use xmlns to make sure it works as they wanted. Then they'd create an RFE asking for "mumble" to be supported...
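The verification method noted above ('nfsstat -m' or grepping /proc/mounts) can be sketched as a small shell check. This is a minimal sketch, not part of the fix itself: the sample line reuses the NFS mount output recorded in this bug's verification steps, while on a live host the line would come from /proc/mounts for the pool's target path.

```shell
# Sketch: verify that libvirt's default hardening options (nodev, nosuid,
# noexec on Linux, per commit f00cde7f11) appear in a pool's mount entry.
# The sample line mimics a /proc/mounts entry; on a live host you would use
# something like: line=$(grep /mnt /proc/mounts)
line='10.73.194.27:/vol/S3/libvirtmanual /mnt/nfs nfs rw,nosuid,nodev,noexec,relatime,vers=3 0 0'
opts=$(printf '%s' "$line" | awk '{print $4}')   # 4th field = mount options
for flag in nodev nosuid noexec; do
  case ",$opts," in
    *",$flag,"*) echo "$flag: present" ;;
    *)           echo "$flag: MISSING" ;;
  esac
done
# prints: nodev: present / nosuid: present / noexec: present
```

The same loop works against 'nfsstat -m' output if the options field is extracted from its "Flags:" line instead.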
Verified Version:
libvirt-4.5.0-14.virtcov.el7.x86_64
qemu-kvm-rhev-2.12.0-26.el7.x86_64

Verified Steps:

Scenario 1: pool-{create|define}[-as] a fs pool

1. Prepare a fs pool xml.

# cat fs.pool
<pool type='fs'>
  <name>fs</name>
  <source>
    <device path='/dev/sdb'/>
  </source>
  <target>
    <path>/mnt/fs</path>
  </target>
</pool>

2. Define and start the fs pool.

# virsh pool-define fs.pool
Pool fs defined from fs.pool
# virsh pool-start fs
Pool fs started
# virsh pool-list --all
 Name      State    Autostart
-------------------------------
 default   active   yes
 fs        active   no

3. Check the mount options.

# mount | grep sdb
/dev/sdb on /mnt/fs type ext4 (rw,nosuid,nodev,noexec,relatime,seclabel,stripe=32,data=ordered)

4. Destroy and undefine the pool.

# virsh pool-destroy fs
Pool fs destroyed
# virsh pool-undefine fs
Pool fs has been undefined

5. Create the fs pool and check the mount options.

# virsh pool-create fs.pool
Pool fs created from fs.pool
# mount | grep sdb
/dev/sdb on /mnt/fs type ext4 (rw,nosuid,nodev,noexec,relatime,seclabel,stripe=32,data=ordered)

6. Pool-define-as/pool-create-as a fs pool and check the mount options.

# virsh pool-create-as fs --type fs --source-dev /dev/sdb --target /mnt/fs
Pool fs created
# mount | grep sdb
/dev/sdb on /mnt/fs type ext4 (rw,nosuid,nodev,noexec,relatime,seclabel,stripe=32,data=ordered)

Scenario 2: pool-{create|define}[-as] a netfs pool with nfs and auto pool format type

1. Prepare a netfs pool xml with nfs or auto format type.

# cat nfs.pool
<pool type='netfs'>
  <name>netfs</name>
  <source>
    <host name='10.73.194.27'/>
    <dir path='/vol/S3/libvirtmanual'/>
    <format type='nfs'/>        (change nfs to the auto format to test)
    <protocol ver='3'/>
  </source>
  <target>
    <path>/mnt</path>
  </target>
</pool>

2. Define and start the nfs pool.

# virsh pool-define nfs.pool
Pool netfs defined from nfs.pool
# virsh pool-start netfs
Pool netfs started

3. Check the mount options.

# mount | grep 10.73.194.27
10.73.194.27:/vol/S3/libvirtmanual on /mnt/nfs type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.73.194.27,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=10.73.194.27)

4. Destroy and undefine the pool.

# virsh pool-destroy netfs
Pool netfs destroyed
# virsh pool-undefine netfs
Pool netfs has been undefined

5. Create the nfs pool and check the mount options.

# virsh pool-create nfs.pool
Pool netfs created from nfs.pool
# mount | grep 10.73.194.27
10.73.194.27:/vol/S3/libvirtmanual on /mnt/nfs type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.73.194.27,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=10.73.194.27)

6. Pool-define-as/pool-create-as a netfs pool with nfs/auto format type.

# virsh pool-create-as netfs --type netfs --source-host 10.73.194.27 --source-path /vol/S3/libvirtmanual --source-format nfs --target /mnt/ --source-protocol-ver 3
Pool netfs created
# mount | grep 10.73.194.27
10.73.194.27:/vol/S3/libvirtmanual on /mnt type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.73.194.27,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=10.73.194.27)

Scenario 3: pool-{create|define}[-as] a netfs pool with glusterfs pool format type

1. Prepare a gluster server, and enable debug logging to generate the useful logs.

2. Prepare a netfs pool xml with glusterfs format type.

# cat glusterfs.pool
<pool type='netfs'>
  <name>netfs</name>
  <source>
    <host name='10.66.7.232'/>
    <dir path='gv0'/>
    <format type='glusterfs'/>
  </source>
  <target>
    <path>/mnt/gluster</path>
  </target>
</pool>

3. Define or create the pool.

# virsh pool-create glusterfs.pool
Pool netfs created from glusterfs.pool
# virsh pool-destroy netfs
Pool netfs destroyed
# virsh pool-define glusterfs.pool
Pool netfs defined from glusterfs.pool
# virsh pool-start netfs
Pool netfs started

4. At the same time as step 3, open another terminal to check the mount options.

# tail -f /var/log/libvirtd_debug.log | grep mount
2019-04-25 07:32:16.545+0000: 30944: debug : virCommandRunAsync:2476 : About to run /usr/bin/mount -t glusterfs 10.66.7.232:gv0 /mnt/gluster -o nodev,nosuid,noexec,direct-io-mode=1
2019-04-25 07:32:23.898+0000: 30940: debug : virCommandRunAsync:2476 : About to run /usr/bin/umount /mnt/gluster
2019-04-25 07:32:35.217+0000: 30942: debug : virCommandRunAsync:2476 : About to run /usr/bin/mount -t glusterfs 10.66.7.232:gv0 /mnt/gluster -o nodev,nosuid,noexec,direct-io-mode=1

5. Pool-define-as/pool-create-as a netfs pool with glusterfs format type.

# virsh pool-create-as netfs --type netfs --source-host 10.66.7.232 --source-path gv0 --source-format glusterfs --target /mnt/gluster
Pool netfs created
# virsh pool-destroy netfs
Pool netfs destroyed
# virsh pool-define-as netfs --type netfs --source-host 10.66.7.232 --source-path gv0 --source-format glusterfs --target /mnt/gluster
Pool netfs defined
# virsh pool-start netfs
Pool netfs started

6. At the same time as step 5, open another terminal to check the mount options.

# tail -f /var/log/libvirtd_debug.log | grep mount
2019-04-25 07:37:01.093+0000: 30944: debug : virCommandRunAsync:2476 : About to run /usr/bin/mount -t glusterfs 10.66.7.232:gv0 /mnt/gluster -o nodev,nosuid,noexec,direct-io-mode=1
2019-04-25 07:37:14.831+0000: 30942: debug : virCommandRunAsync:2476 : About to run /usr/bin/umount /mnt/gluster
2019-04-25 07:37:29.718+0000: 30941: debug : virCommandRunAsync:2476 : About to run /usr/bin/mount -t glusterfs 10.66.7.232:gv0 /mnt/gluster -o nodev,nosuid,noexec,direct-io-mode=1

Scenario 4: pool-{create|define}[-as] a netfs pool with cifs pool format type

1. Prepare a samba server to test.

2. Prepare a netfs pool xml with cifs format type.

# cat cifs.pool
<pool type='netfs'>
  <name>netfs1</name>
  <source>
    <host name='10.66.7.232'/>
    <dir path='common'/>
    <format type='cifs'/>
  </source>
  <target>
    <path>/mnt/cifs</path>
  </target>
</pool>

3. Retest step 3 - step 6 in Scenario 2 by using cifs.

# tail -f /var/log/libvirtd_debug.log | grep mount
2019-04-25 09:24:00.220+0000: 30940: debug : virCommandRunAsync:2476 : About to run /usr/bin/mount -t cifs //10.66.7.232/common /mnt/cifs -o nodev,nosuid,noexec,guest
2019-04-25 09:24:35.095+0000: 30941: debug : virCommandRunAsync:2476 : About to run /usr/bin/umount /mnt/cifs
2019-04-25 09:24:41.445+0000: 30943: debug : virCommandRunAsync:2476 : About to run /usr/bin/mount -t cifs //10.66.7.232/common /mnt/cifs -o nodev,nosuid,noexec,guest

Scenario 5: [negative] Create a netfs pool with glusterfs format, which does not support protocol ver

1. Prepare a netfs pool xml with glusterfs format type.

# cat glusterfs.pool
<pool type='netfs'>
  <name>netfs</name>
  <source>
    <host name='10.66.7.232'/>
    <dir path='gv0'/>
    <format type='glusterfs'/>
    <protocol ver='3'/>
  </source>
  <target>
    <path>/mnt/gluster</path>
  </target>
</pool>

2. Create the pool.

# virsh pool-create glusterfs.pool
error: Failed to create pool from glusterfs.pool
error: unsupported configuration: storage pool protocol ver unsupported for pool type 'glusterfs'

Scenario 6: [negative] Create a netfs pool with a protocol ver that has an invalid value

1. Prepare a netfs pool xml with nfs format type.

# cat nfs.pool
<pool type='netfs'>
  <name>netfs</name>
  <source>
    <host name='10.73.194.27'/>
    <dir path='/vol/S3/libvirtmanual'/>
    <format type='nfs'/>
    <protocol ver='a'/>
  </source>
  <target>
    <path>/mnt</path>
  </target>
</pool>

2. Create the pool.

# virsh pool-create nfs.pool
error: Failed to create pool from nfs.pool
error: XML error: storage pool protocol ver 'a' is malformaed

Scenario 7: Start guest with the related nfs pool

1. Check the volume in the netfs pool.

# virsh vol-list netfs
 Name       Path
------------------------------------
 test.img   /mnt/nfs/test.img

2. Start the guest with the following disk.

# virsh edit guest
...
    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw'/>
      <source pool='netfs' volume='test.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
...

# virsh start lmn
Domain lmn started

Created attachment 1559795 [details]: Code Coverage Report
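Scenario 6 above trips libvirt's numeric check on the <protocol ver='...'/> attribute. A minimal shell sketch of that style of validation follows; it is illustrative only, not libvirt's actual C code, and it uses the corrected "malformed" spelling rather than the typo in the recorded output:

```shell
# Illustrative validation of the protocol ver attribute: only numeric
# values are accepted; anything else is rejected with an error message.
ver='a'
case "$ver" in
  ''|*[!0-9]*) echo "storage pool protocol ver '$ver' is malformed" ;;
  *)           echo "protocol ver accepted: $ver" ;;
esac
# prints: storage pool protocol ver 'a' is malformed
```

With ver='3' the same check would accept the value, matching the positive scenarios earlier.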
Hi John, there are two small issues in this bug; please help to fix them, thanks.

Q1: There's a typo in the error message. In Scenario 6, the error message reads "storage pool protocol ver 'a' is malformaed"; 'malformaed' should be changed to 'malformed'.

Q2: There's no --source-protocol-ver option for the pool-define-as command in the man page.

Thanks - both of these are minor issues. pool-define-as will work with --source-protocol-ver; it's just the virsh.pod for that command that wasn't updated. Fortunately, the pool-define-as documentation indicates: "Use the same arguments as B<pool-create-as>, except for the I<--build>, I<--overwrite>, and I<--no-overwrite> options." So even though "--source-protocol-ver ver" wasn't listed there, it can be inferred. Neither issue requires a change to fix the functionality, and thus neither should stand in the way of verification. I've posted a couple of patches upstream to address them; see: https://www.redhat.com/archives/libvir-list/2019-April/msg01472.html

The patches were reviewed and pushed upstream as commits b97801f3 and a536088.

Hi John, will these new small patches be backported to RHEL-7.7? If yes, we need to change the bug status and track them, thanks.

I don't think they need to be backported at this point. They are upstream and will eventually get into RHEL via rebase.

As Jiri noted, no. The functionality is there; it's just that the docs weren't as specific as they should be, and the error message change is minor.

According to Comment 18, moving this bug to verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2294