Description of problem:

Gluster v5.5 (oVirt 4.3.2) fails to create a snapshot when the gluster bricks have an ".automount" unit.

Version-Release number of selected component (if applicable):

glusterfs-5.5-1.el7.x86_64
glusterfs-api-5.5-1.el7.x86_64
glusterfs-api-devel-5.5-1.el7.x86_64
glusterfs-cli-5.5-1.el7.x86_64
glusterfs-client-xlators-5.5-1.el7.x86_64
glusterfs-coreutils-0.2.0-1.el7.x86_64
glusterfs-devel-5.5-1.el7.x86_64
glusterfs-events-5.5-1.el7.x86_64
glusterfs-extra-xlators-5.5-1.el7.x86_64
glusterfs-fuse-5.5-1.el7.x86_64
glusterfs-geo-replication-5.5-1.el7.x86_64
glusterfs-libs-5.5-1.el7.x86_64
glusterfs-rdma-5.5-1.el7.x86_64
glusterfs-resource-agents-5.5-1.el7.noarch
glusterfs-server-5.5-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.6.x86_64
nfs-ganesha-gluster-2.7.2-1.el7.x86_64
python2-gluster-5.5-1.el7.x86_64
vdsm-gluster-4.30.11-1.el7.x86_64

How reproducible:

Always.

Steps to Reproduce:

1. Create the brick mount & automount units. Ex:

[root@ovirt1 system]# systemctl cat gluster_bricks-isos.mount
# /etc/systemd/system/gluster_bricks-isos.mount
[Unit]
Description=Mount glusterfs brick - ISOS
Requires = vdo.service
After = vdo.service
Before = glusterd.service
Conflicts = umount.target

[Mount]
What=/dev/mapper/gluster_vg_md0-gluster_lv_isos
Where=/gluster_bricks/isos
Type=xfs
Options=inode64,noatime,nodiratime

[Install]
WantedBy=glusterd.service

[root@ovirt1 system]# systemctl cat gluster_bricks-isos.automount
# /etc/systemd/system/gluster_bricks-isos.automount
[Unit]
Description=automount for gluster brick ISOS

[Automount]
Where=/gluster_bricks/isos

[Install]
WantedBy=multi-user.target

2. Create a gluster volume on the bricks. Ex:

Volume Name: isos
Type: Replicate
Volume ID: 9b92b5bd-79f5-427b-bd8d-af28b038ed2a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/isos/isos
Brick2: ovirt2:/gluster_bricks/isos/isos
Brick3: ovirt3.localdomain:/gluster_bricks/isos/isos (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

3. Create a snapshot:

gluster snapshot create isos-snap-2019-04-11 isos description TEST
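Note: with the automount unit in place, /proc/mounts carries an autofs placeholder entry whose device field is "systemd-1" for the brick path, in addition to the real xfs entry once the mount has been triggered. The output below is illustrative (field values such as fd= will differ):

[root@ovirt1 system]# grep /gluster_bricks/isos /proc/mounts
systemd-1 /gluster_bricks/isos autofs rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct 0 0
/dev/mapper/gluster_vg_md0-gluster_lv_isos /gluster_bricks/isos xfs rw,noatime,nodiratime,attr2,inode64,noquota 0 0

The "systemd-1" device from the autofs entry is what shows up in the prevalidate error under Actual results.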
Actual results:

Errors in the logs and on the console:

[2019-04-12 07:56:54.526508] E [MSGID: 106077] [glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] [glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] [glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] [glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] [glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] [glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] [glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre Validation Failed

Expected results:

Gluster should exclude entries of type "autofs" in /proc/mounts and create the snapshot (a sketch of such filtering is at the end of this report).

Additional info:

Disabling the automount units and restarting the mount units fixes the issue (example commands below).
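The workaround, using the unit names from the example above (run on every host, once per brick):

[root@ovirt1 system]# systemctl disable --now gluster_bricks-isos.automount
[root@ovirt1 system]# systemctl restart gluster_bricks-isos.mount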
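As a rough sketch of the expected behavior (not glusterd's actual code): when resolving a brick's device from /proc/mounts, entries of type "autofs" should be skipped, i.e. the equivalent of:

[root@ovirt1 system]# awk -v mp=/gluster_bricks/isos '$2 == mp && $3 != "autofs" { print $1 }' /proc/mounts
/dev/mapper/gluster_vg_md0-gluster_lv_isos

Skipping the autofs placeholder leaves the real LV device for the thin-provisioning check, instead of "systemd-1".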
This bug has been moved to https://github.com/gluster/glusterfs/issues/997 and will be tracked there from now on. Visit the GitHub issue for further details.