Red Hat Bugzilla – Attachment 1478272: Details for Bug 1621436
Heketi returns HTTP code 500 when we try to delete expanded volumes in parallel
heketi_server.log

Description: heketi_server.log
Filename:    heketi_server.log
MIME Type:   text/plain
Creator:     Valerii Ponomarov
Created:     2018-08-23 15:51:18 UTC
Size:        136.60 KB
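The log below traces heketi's asynchronous REST flow for the failing scenario: each DELETE /volumes/{id} is answered with 202 Accepted and a /queue/{job} entry, which the client polls until it gets 303 See Other (job finished) or 500 Internal Server Error (the failure this bug reports). Here is a minimal Go sketch of that client-side flow, for illustration only: the server URL is a placeholder, the volume IDs are taken from the log, and it assumes the job URL arrives in the Location header of the 202 response. It is not heketi's actual client.

package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

// Client that does not auto-follow the 303 redirect a finished job returns,
// so the final status code of the queue entry stays observable.
var client = &http.Client{
	CheckRedirect: func(req *http.Request, via []*http.Request) error {
		return http.ErrUseLastResponse
	},
}

func deleteVolume(base, id string) error {
	req, err := http.NewRequest(http.MethodDelete, base+"/volumes/"+id, nil)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	// The log shows "Completed 202 Accepted" for the DELETE request.
	if resp.StatusCode != http.StatusAccepted {
		return fmt.Errorf("volume %s: unexpected status %s", id, resp.Status)
	}
	// Assumption: the async job URL (e.g. /queue/78467496...) is returned in
	// the Location header of the 202 response.
	queue := resp.Header.Get("Location")
	for {
		r, err := client.Get(base + queue)
		if err != nil {
			return err
		}
		r.Body.Close()
		switch r.StatusCode {
		case http.StatusOK: // 200: job still running, poll again
			time.Sleep(time.Second)
		case http.StatusSeeOther: // 303: job finished successfully
			return nil
		default: // 500 Internal Server Error is the failure in this report
			return fmt.Errorf("volume %s: delete failed with %s", id, r.Status)
		}
	}
}

func main() {
	base := "http://heketi.example:8080" // placeholder endpoint
	ids := []string{
		"7c63529f0b15f298ebf42de88f10bc57", // volume IDs taken from the log
		"cfca9c0de6938b06ef4528a12de74201",
	}
	var wg sync.WaitGroup
	for _, id := range ids {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			if err := deleteVolume(base, id); err != nil {
				fmt.Println(err)
			}
		}(id)
	}
	wg.Wait()
}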
>[heketi] INFO 2018/08/23 15:10:21 Starting Node Health Status refresh >[cmdexec] INFO 2018/08/23 15:10:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0 >[kubeexec] DEBUG 2018/08/23 15:10:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 1 day 5h ago > Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 430 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service > ├─ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option *replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f >[heketi] INFO 2018/08/23 15:10:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true >[cmdexec] INFO 2018/08/23 15:10:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1 >[kubeexec] DEBUG 2018/08/23 15:10:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded 
(/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 1 day 5h ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service > ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 >[heketi] INFO 2018/08/23 15:10:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true >[cmdexec] INFO 2018/08/23 15:10:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2 >[kubeexec] DEBUG 2018/08/23 15:10:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 1 day 5h ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service > ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d >[heketi] INFO 2018/08/23 15:10:22 Periodic health check status: node f48c1fefdee6989a05560260afcd0a2d up=true >[heketi] INFO 2018/08/23 15:10:22 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/7c63529f0b15f298ebf42de88f10bc57 >[negroni] Completed 202 Accepted in 115.512441ms >[asynchttp] INFO 2018/08/23 15:10:42 asynchttp.go:288: Started job 78467496c7e45c44c8301f870f17c8bd >[heketi] INFO 2018/08/23 15:10:42 Started async operation: Delete Volume >[negroni] Started GET /queue/78467496c7e45c44c8301f870f17c8bd >[negroni] Completed 200 OK in 108.062µs >[kubeexec] DEBUG 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: gluster --mode=script snapshot list vol_7c63529f0b15f298ebf42de88f10bc57 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_7c63529f0b15f298ebf42de88f10bc57) does not exist</opErrstr> ></cliOutput> >[heketi] WARNING 2018/08/23 15:10:42 not attempting to delete missing volume vol_7c63529f0b15f298ebf42de88f10bc57 >[heketi] INFO 2018/08/23 15:10:42 Deleting brick 64da05c4fddbb74eef04191bd51aa2e5 >[heketi] INFO 2018/08/23 15:10:42 
Deleting brick a4e2a2ac1c2aa4a5940e31335c15646e >[heketi] INFO 2018/08/23 15:10:42 Deleting brick c862180c0bcd8b0ae95ef4a908b722dd >[kubeexec] ERROR 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [umount /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 32]: Stdout []: Stderr [umount: /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5: mountpoint not found >] >[cmdexec] ERROR 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:198: Unable to execute command on glusterfs-cns-hzqg6: umount: /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5: mountpoint not found >[kubeexec] ERROR 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [umount /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e] on glusterfs-cns-qrfrz: Err[command terminated with exit code 32]: Stdout []: Stderr [umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy. > (In some cases useful info about processes that use > the device is found by lsof(8) or fuser(1)) >] >[cmdexec] ERROR 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:198: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy. > (In some cases useful info about processes that use > the device is found by lsof(8) or fuser(1)) >[kubeexec] ERROR 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [umount /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 32]: Stdout []: Stderr [umount: /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd: mountpoint not found >] >[cmdexec] ERROR 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:198: Unable to execute command on glusterfs-cns-jlgq2: umount: /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd: mountpoint not found >[kubeexec] DEBUG 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: mount >Result: overlay on / type overlay (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/U5XO5J2DCZNGUCP5R4MKXSTQNB:/var/lib/docker/overlay2/l/GSPEIWWX6AE2HLHRTHSAXNJ5UV:/var/lib/docker/overlay2/l/XQMHTRWR5Z7RQOSAAZPG5PYFQO:/var/lib/docker/overlay2/l/MBXYP3T6L66XZYLRZW54453HMT,upperdir=/var/lib/docker/overlay2/bd14b4ee8c93cd74e40735e7740631990e59549305b2ad4fbad99c9edabc33e0/diff,workdir=/var/lib/docker/overlay2/bd14b4ee8c93cd74e40735e7740631990e59549305b2ad4fbad99c9edabc33e0/work) >proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) >sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel) >/dev/sdc on /run type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=16378916k,nr_inodes=4094729,mode=755) >tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel) >devpts on /dev/pts type devpts 
(rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000) >hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel) >mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel) >/dev/mapper/docker--vol-dockerlv on /etc/resolv.conf type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >/dev/sdc on /dev/termination-log type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >/dev/mapper/rhel_dhcp46--210-root on /etc/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /etc/target type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >tmpfs on /run/lvm type tmpfs (rw,nosuid,nodev,seclabel,mode=755) >shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c371,c614",size=65536k) >/dev/sdc on /etc/hosts type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >/dev/mapper/rhel_dhcp46--210-root on /etc/ssl type xfs (ro,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/docker--vol-dockerlv on /etc/hostname type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >/dev/mapper/docker--vol-dockerlv on /run/secrets type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/glusterd type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /var/log/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755) >cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) >cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices) >cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer) >cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu) >cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb) >cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids) >cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls) >cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio) >cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event) >cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory) >cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset) >/dev/mapper/rhel_dhcp46--210-root on /usr/lib/modules type xfs (ro,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/misc/glusterfsd type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime,seclabel) >systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=84403) >debugfs on /sys/kernel/debug type debugfs (rw,relatime) >configfs on /sys/kernel/config type configfs (rw,relatime) >sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) >/dev/mapper/vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250 on 
/var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) >/dev/mapper/vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa on /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) >[cmdexec] WARNING 2018/08/23 15:10:42 brick path [/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5] not mounted, assuming deleted >[kubeexec] DEBUG 2018/08/23 15:10:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mount >Result: overlay on / type overlay (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/SXPAWETXSHQGNMVRHZJO4FBZVL:/var/lib/docker/overlay2/l/TZBWO4EQPSJF2XM7CWSUWH4MY2:/var/lib/docker/overlay2/l/BQTYMQYGPAKFJ7JDSYFKDB5KRA:/var/lib/docker/overlay2/l/T5YE2NA4OL27CYLFP3AG22I554,upperdir=/var/lib/docker/overlay2/8d56ecec848c4867bd68ea143df91805c04ff4a6fa66c805eff322c933ac00e0/diff,workdir=/var/lib/docker/overlay2/8d56ecec848c4867bd68ea143df91805c04ff4a6fa66c805eff322c933ac00e0/work) >proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) >sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel) >devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=16378916k,nr_inodes=4094729,mode=755) >tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel) >devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000) >hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel) >mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel) >/dev/sdc on /run type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >/dev/mapper/docker--vol-dockerlv on /etc/resolv.conf type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >tmpfs on /run/lvm type tmpfs (rw,nosuid,nodev,seclabel,mode=755) >/dev/mapper/rhel_dhcp46--210-root on /etc/target type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /etc/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/sdc on /dev/termination-log type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c542,c552",size=65536k) >/dev/sdc on /etc/hosts type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >/dev/mapper/rhel_dhcp46--210-root on /etc/ssl type xfs (ro,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/docker--vol-dockerlv on /etc/hostname type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >/dev/mapper/docker--vol-dockerlv on /run/secrets type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755) >cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) >cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu) >cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory) >cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset) >cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls) >cgroup 
on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids) >cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio) >cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices) >cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer) >cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb) >cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/glusterd type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /var/log/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /usr/lib/modules type xfs (ro,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/misc/glusterfsd type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime,seclabel) >systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=84609) >configfs on /sys/kernel/config type configfs (rw,relatime) >debugfs on /sys/kernel/debug type debugfs (rw,relatime) >sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) >/dev/mapper/vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0 on /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) >/dev/mapper/vg_557622450e27d9663fd36087fbe35bee-brick_a4e2a2ac1c2aa4a5940e31335c15646e on /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) >/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974 on /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) >[negroni] Started GET /queue/78467496c7e45c44c8301f870f17c8bd >[negroni] Completed 200 OK in 152.229µs >[kubeexec] DEBUG 2018/08/23 15:10:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mount >Result: overlay on / type overlay (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/OFUIYSJFBAYVM6K7KF5CGMCW3V:/var/lib/docker/overlay2/l/ITGEMGBNMOKYHHLYP2GVUYA5WO:/var/lib/docker/overlay2/l/LIUQEN2SQMRD6C2F2SVINNX6XJ:/var/lib/docker/overlay2/l/GIONCUIDKEIFJRKX7TAKGOKFUH,upperdir=/var/lib/docker/overlay2/a3dd718e2080efe6c1e8d7e459eeced151ca909d3e63720a60b68e24345428cc/diff,workdir=/var/lib/docker/overlay2/a3dd718e2080efe6c1e8d7e459eeced151ca909d3e63720a60b68e24345428cc/work) >proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) >sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel) >/dev/sdc on /run type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=16378916k,nr_inodes=4094729,mode=755) >tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel) >devpts on /dev/pts type devpts 
(rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000) >mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel) >hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel) >/dev/mapper/docker--vol-dockerlv on /etc/resolv.conf type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >/dev/mapper/rhel_dhcp46--210-root on /etc/target type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /etc/ssl type xfs (ro,relatime,seclabel,attr2,inode64,noquota) >/dev/sdc on /dev/termination-log type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >/dev/mapper/rhel_dhcp46--210-root on /etc/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c304,c449",size=65536k) >/dev/sdc on /etc/hosts type xfs (rw,relatime,seclabel,attr2,inode64,grpquota) >tmpfs on /run/lvm type tmpfs (rw,nosuid,nodev,seclabel,mode=755) >/dev/mapper/docker--vol-dockerlv on /etc/hostname type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >/dev/mapper/docker--vol-dockerlv on /run/secrets type xfs (rw,relatime,seclabel,attr2,inode64,prjquota) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/glusterd type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755) >cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) >cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids) >cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls) >cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices) >cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset) >cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer) >cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu) >cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory) >cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event) >cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb) >cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio) >/dev/mapper/rhel_dhcp46--210-root on /var/log/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /usr/lib/modules type xfs (ro,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >/dev/mapper/rhel_dhcp46--210-root on /var/lib/misc/glusterfsd type xfs (rw,relatime,seclabel,attr2,inode64,noquota) >tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime,seclabel) >systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=23,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=87226) >debugfs on /sys/kernel/debug type debugfs (rw,relatime) >configfs on /sys/kernel/config type configfs (rw,relatime) >sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime) >/dev/mapper/vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e on 
/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) >/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b on /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) >[cmdexec] WARNING 2018/08/23 15:10:43 brick path [/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd] not mounted, assuming deleted >[kubeexec] DEBUG 2018/08/23 15:10:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: sed -i.save "/brick_64da05c4fddbb74eef04191bd51aa2e5/d" /var/lib/heketi/fstab >Result: >[heketi] ERROR 2018/08/23 15:10:43 /src/github.com/heketi/heketi/apps/glusterfs/brick_create.go:60: error destroying brick a4e2a2ac1c2aa4a5940e31335c15646e: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy. > (In some cases useful info about processes that use > the device is found by lsof(8) or fuser(1)) >[kubeexec] DEBUG 2018/08/23 15:10:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: sed -i.save "/brick_a4e2a2ac1c2aa4a5940e31335c15646e/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /blockvolumes >[negroni] Completed 200 OK in 313.951µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.050999ms >[negroni] Started GET /volumes/7c63529f0b15f298ebf42de88f10bc57 >[negroni] Completed 200 OK in 566.707µs >[negroni] Started GET /volumes/88eb861a7ad3f268c2d092be1287cef6 >[negroni] Completed 200 OK in 500.071µs >[negroni] Started GET /volumes/a833a9314f4557589a9d874105357140 >[negroni] Completed 200 OK in 1.4795ms >[negroni] Started GET /queue/78467496c7e45c44c8301f870f17c8bd >[negroni] Completed 200 OK in 99.038µs >[kubeexec] DEBUG 2018/08/23 15:10:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: sed -i.save "/brick_c862180c0bcd8b0ae95ef4a908b722dd/d" /var/lib/heketi/fstab >Result: >[cmdexec] WARNING 2018/08/23 15:10:44 did not delete missing lv: vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5 >[kubeexec] ERROR 2018/08/23 15:10:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvremove --autobackup=n -f vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5" >] >[cmdexec] WARNING 2018/08/23 15:10:45 did not delete missing lv: vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd >[kubeexec] ERROR 2018/08/23 15:10:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvremove --autobackup=n -f vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume 
"vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd" >] >[negroni] Started GET /queue/78467496c7e45c44c8301f870f17c8bd >[negroni] Completed 200 OK in 104.124µs >[cmdexec] WARNING 2018/08/23 15:10:45 unable to count lvs in missing thin pool: vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5 >[kubeexec] ERROR 2018/08/23 15:10:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvs --noheadings --options=thin_count vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5" >] >[cmdexec] WARNING 2018/08/23 15:10:46 unable to count lvs in missing thin pool: vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd >[kubeexec] ERROR 2018/08/23 15:10:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvs --noheadings --options=thin_count vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd" >] >[cmdexec] WARNING 2018/08/23 15:10:46 did not delete missing thin pool: vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5 >[kubeexec] ERROR 2018/08/23 15:10:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvremove --autobackup=n -f vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5" >] >[negroni] Started GET /queue/78467496c7e45c44c8301f870f17c8bd >[negroni] Completed 200 OK in 98.345µs >[cmdexec] WARNING 2018/08/23 15:10:46 did not delete missing thin pool: vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd >[kubeexec] ERROR 2018/08/23 15:10:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvremove --autobackup=n -f vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd" >] >[kubeexec] ERROR 2018/08/23 15:10:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [rmdir /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 1]: Stdout []: Stderr [rmdir: failed to remove '/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5': No such file or directory >] >[cmdexec] ERROR 2018/08/23 15:10:47 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:279: Unable to execute command on glusterfs-cns-hzqg6: rmdir: failed to remove '/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5': No such file or directory >[negroni] Started GET /queue/78467496c7e45c44c8301f870f17c8bd >[negroni] Completed 200 OK in 144.242µs >[kubeexec] ERROR 2018/08/23 15:10:47 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [rmdir /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 1]: Stdout []: Stderr [rmdir: failed to remove '/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd': No such file or directory >] >[cmdexec] ERROR 2018/08/23 15:10:47 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:279: Unable to execute command on glusterfs-cns-jlgq2: rmdir: failed to remove '/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd': No such file or directory >[heketi] ERROR 2018/08/23 15:10:47 /src/github.com/heketi/heketi/apps/glusterfs/brick_create.go:77: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy. > (In some cases useful info about processes that use > the device is found by lsof(8) or fuser(1)) >[heketi] ERROR 2018/08/23 15:10:47 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:683: Unable to delete bricks: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy. > (In some cases useful info about processes that use > the device is found by lsof(8) or fuser(1)) >[heketi] ERROR 2018/08/23 15:10:47 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:433: Error executing delete volume: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy. > (In some cases useful info about processes that use > the device is found by lsof(8) or fuser(1)) >[asynchttp] INFO 2018/08/23 15:10:47 asynchttp.go:292: Completed job 78467496c7e45c44c8301f870f17c8bd in 5.24332848s >[heketi] ERROR 2018/08/23 15:10:47 /src/github.com/heketi/heketi/apps/glusterfs/operations_manage.go:113: Delete Volume Failed: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy. 
> (In some cases useful info about processes that use > the device is found by lsof(8) or fuser(1)) >[negroni] Started GET /queue/78467496c7e45c44c8301f870f17c8bd >[negroni] Completed 500 Internal Server Error in 133.026µs >[negroni] Started POST /volumes >[heketi] INFO 2018/08/23 15:11:20 Allocating brick set #0 >[negroni] Started POST /volumes >[negroni] Completed 202 Accepted in 117.383245ms >[heketi] INFO 2018/08/23 15:11:20 Allocating brick set #0 >[asynchttp] INFO 2018/08/23 15:11:20 asynchttp.go:288: Started job b72af76011ad02421cc06aae03268601 >[heketi] INFO 2018/08/23 15:11:20 Started async operation: Create Volume >[heketi] INFO 2018/08/23 15:11:20 Creating brick 78962de2b5e56f6867e21f2608906bd4 >[negroni] Started GET /queue/b72af76011ad02421cc06aae03268601 >[negroni] Completed 200 OK in 95.539µs >[kubeexec] DEBUG 2018/08/23 15:11:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkdir -p /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4 >Result: >[negroni] Completed 202 Accepted in 217.986938ms >[asynchttp] INFO 2018/08/23 15:11:20 asynchttp.go:288: Started job 75c5311972934012e14a54e9c76538f5 >[heketi] INFO 2018/08/23 15:11:20 Started async operation: Create Volume >[heketi] INFO 2018/08/23 15:11:20 Creating brick 9bb81abeb00ec51c39ad767c63f718f6 >[negroni] Started GET /queue/75c5311972934012e14a54e9c76538f5 >[negroni] Completed 200 OK in 106.032µs >[kubeexec] DEBUG 2018/08/23 15:11:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_6c0538bc7bed0679f0f595c61c72656f/tp_bf8809b355339e7f39804e80780ec6fc --virtualsize 1048576K --name brick_78962de2b5e56f6867e21f2608906bd4 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_78962de2b5e56f6867e21f2608906bd4" created. 
>[negroni] Started GET /queue/b72af76011ad02421cc06aae03268601 >[negroni] Completed 200 OK in 109.72µs >[negroni] Started GET /queue/75c5311972934012e14a54e9c76538f5 >[negroni] Completed 200 OK in 106.569µs >[kubeexec] DEBUG 2018/08/23 15:11:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4 >Result: meta-data=/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/08/23 15:11:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: awk "BEGIN {print \"/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4 /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/08/23 15:11:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4 /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4 >Result: >[kubeexec] DEBUG 2018/08/23 15:11:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkdir /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4/brick >Result: >[cmdexec] INFO 2018/08/23 15:11:24 Creating volume rally-pp0w0lle2msoqd with no durability >[kubeexec] DEBUG 2018/08/23 15:11:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkdir -p /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_9bb81abeb00ec51c39ad767c63f718f6 >Result: >[negroni] Started GET /queue/b72af76011ad02421cc06aae03268601 >[negroni] Completed 200 OK in 175.98µs >[negroni] Started GET /queue/75c5311972934012e14a54e9c76538f5 >[negroni] Completed 200 OK in 174.689µs >[kubeexec] DEBUG 2018/08/23 15:11:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_6c0538bc7bed0679f0f595c61c72656f/tp_e3e64e8cc25beee2efd06880d2c8636a --virtualsize 1048576K --name brick_9bb81abeb00ec51c39ad767c63f718f6 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_9bb81abeb00ec51c39ad767c63f718f6" created. 
>[negroni] Started GET /queue/b72af76011ad02421cc06aae03268601 >[negroni] Completed 200 OK in 181.133µs >[negroni] Started GET /queue/75c5311972934012e14a54e9c76538f5 >[negroni] Completed 200 OK in 118.886µs >[kubeexec] DEBUG 2018/08/23 15:11:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_9bb81abeb00ec51c39ad767c63f718f6 >Result: meta-data=/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_9bb81abeb00ec51c39ad767c63f718f6 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/08/23 15:11:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: awk "BEGIN {print \"/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_9bb81abeb00ec51c39ad767c63f718f6 /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_9bb81abeb00ec51c39ad767c63f718f6 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/08/23 15:11:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_9bb81abeb00ec51c39ad767c63f718f6 /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_9bb81abeb00ec51c39ad767c63f718f6 >Result: >[kubeexec] DEBUG 2018/08/23 15:11:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkdir /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_9bb81abeb00ec51c39ad767c63f718f6/brick >Result: >[cmdexec] INFO 2018/08/23 15:11:27 Creating volume rally-h6rrrujhjr1bp5 with no durability >[kubeexec] DEBUG 2018/08/23 15:11:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script volume create rally-pp0w0lle2msoqd 10.70.46.10:/var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4/brick >Result: volume create: rally-pp0w0lle2msoqd: success: please start the volume to access data >[kubeexec] DEBUG 2018/08/23 15:11:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script volume start rally-pp0w0lle2msoqd >Result: volume start: rally-pp0w0lle2msoqd: success >[negroni] Started GET /queue/b72af76011ad02421cc06aae03268601 >[negroni] Completed 200 OK in 102.758µs >[heketi] INFO 2018/08/23 15:11:29 Create Volume succeeded >[asynchttp] INFO 2018/08/23 15:11:29 asynchttp.go:292: Completed job b72af76011ad02421cc06aae03268601 in 8.144678569s >[negroni] Started GET /queue/75c5311972934012e14a54e9c76538f5 >[negroni] Completed 200 OK in 124.49µs >[kubeexec] DEBUG 2018/08/23 15:11:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script volume create rally-h6rrrujhjr1bp5 
10.70.46.10:/var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_9bb81abeb00ec51c39ad767c63f718f6/brick >Result: volume create: rally-h6rrrujhjr1bp5: success: please start the volume to access data >[kubeexec] DEBUG 2018/08/23 15:11:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script volume start rally-h6rrrujhjr1bp5 >Result: volume start: rally-h6rrrujhjr1bp5: success >[heketi] INFO 2018/08/23 15:11:30 Create Volume succeeded >[asynchttp] INFO 2018/08/23 15:11:30 asynchttp.go:292: Completed job 75c5311972934012e14a54e9c76538f5 in 9.044720465s >[negroni] Started GET /queue/b72af76011ad02421cc06aae03268601 >[negroni] Completed 303 See Other in 225.379µs >[negroni] Started GET /volumes/cfca9c0de6938b06ef4528a12de74201 >[negroni] Completed 200 OK in 6.166851ms >[negroni] Started POST /volumes/cfca9c0de6938b06ef4528a12de74201/expand >[heketi] INFO 2018/08/23 15:11:31 Allocating brick set #0 >[negroni] Started GET /queue/75c5311972934012e14a54e9c76538f5 >[negroni] Completed 303 See Other in 128.89µs >[negroni] Started GET /volumes/f7e8d452a31fe300d75499749fed9a2b >[negroni] Completed 200 OK in 531.065µs >[negroni] Started POST /volumes/f7e8d452a31fe300d75499749fed9a2b/expand >[negroni] Completed 202 Accepted in 150.293224ms >[heketi] INFO 2018/08/23 15:11:31 Allocating brick set #0 >[asynchttp] INFO 2018/08/23 15:11:31 asynchttp.go:288: Started job 52f8e65b5d8553f56986e2881af459f9 >[heketi] INFO 2018/08/23 15:11:31 Started async operation: Expand Volume >[heketi] INFO 2018/08/23 15:11:31 Creating brick da0c1db1bf4f2e1d97086d5c353567a3 >[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9 >[negroni] Completed 200 OK in 134.706µs >[kubeexec] DEBUG 2018/08/23 15:11:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkdir -p /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3 >Result: >[negroni] Completed 202 Accepted in 208.150273ms >[asynchttp] INFO 2018/08/23 15:11:31 asynchttp.go:288: Started job 3a9a33dc2b409f05142def1f38ac4401 >[heketi] INFO 2018/08/23 15:11:31 Started async operation: Expand Volume >[heketi] INFO 2018/08/23 15:11:31 Creating brick 1fe6af53195f6ab434b249a6147c6c00 >[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401 >[negroni] Completed 200 OK in 94.516µs >[kubeexec] DEBUG 2018/08/23 15:11:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_80280148c66c9e91e0c27f64a751900f/tp_cca9b20cf7150b9d6c6ffb0cf21ed67a --virtualsize 1048576K --name brick_da0c1db1bf4f2e1d97086d5c353567a3 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_da0c1db1bf4f2e1d97086d5c353567a3" created. 
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9 >[negroni] Completed 200 OK in 132.164µs >[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401 >[negroni] Completed 200 OK in 186.709µs >[kubeexec] DEBUG 2018/08/23 15:11:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3 >Result: meta-data=/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/08/23 15:11:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: awk "BEGIN {print \"/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3 /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/08/23 15:11:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3 /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3 >Result: >[kubeexec] DEBUG 2018/08/23 15:11:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkdir /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3/brick >Result: >[kubeexec] DEBUG 2018/08/23 15:11:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkdir -p /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_1fe6af53195f6ab434b249a6147c6c00 >Result: >[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9 >[negroni] Completed 200 OK in 180.269µs >[kubeexec] DEBUG 2018/08/23 15:11:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_80280148c66c9e91e0c27f64a751900f/tp_f4ec8a1b1576bb2b0826c5b7b555ac52 --virtualsize 1048576K --name brick_1fe6af53195f6ab434b249a6147c6c00 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_1fe6af53195f6ab434b249a6147c6c00" created. 
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 181.856µs
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9
>[negroni] Completed 200 OK in 192.945µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 202.009µs
>[kubeexec] DEBUG 2018/08/23 15:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_1fe6af53195f6ab434b249a6147c6c00
>Result: meta-data=/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_1fe6af53195f6ab434b249a6147c6c00 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/08/23 15:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: awk "BEGIN {print \"/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_1fe6af53195f6ab434b249a6147c6c00 /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_1fe6af53195f6ab434b249a6147c6c00 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/08/23 15:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_1fe6af53195f6ab434b249a6147c6c00 /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_1fe6af53195f6ab434b249a6147c6c00
>Result:
>[kubeexec] DEBUG 2018/08/23 15:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkdir /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_1fe6af53195f6ab434b249a6147c6c00/brick
>Result:
>[kubeexec] DEBUG 2018/08/23 15:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: gluster --mode=script volume add-brick rally-pp0w0lle2msoqd 10.70.46.26:/var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3/brick
>Result: volume add-brick: success
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9
>[negroni] Completed 200 OK in 177.272µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 130.42µs
>[kubeexec] DEBUG 2018/08/23 15:11:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: gluster --mode=script volume add-brick rally-h6rrrujhjr1bp5 10.70.46.26:/var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_1fe6af53195f6ab434b249a6147c6c00/brick
>Result: volume add-brick: success
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9
>[negroni] Completed 200 OK in 175.936µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 113.759µs
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9
>[negroni] Completed 200 OK in 183.602µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 216.279µs
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9
>[negroni] Completed 200 OK in 183.579µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 138.812µs
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9
>[negroni] Completed 200 OK in 157.527µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 198.509µs
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9
>[negroni] Completed 200 OK in 141.575µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 200.715µs
>[kubeexec] DEBUG 2018/08/23 15:11:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: gluster --mode=script volume rebalance rally-pp0w0lle2msoqd start
>Result: volume rebalance: rally-pp0w0lle2msoqd: success: Rebalance on rally-pp0w0lle2msoqd has been started successfully. Use rebalance status command to check status of the rebalance process.
>ID: 80eb5802-5566-4319-82e0-8bdcef72b311
>[heketi] INFO 2018/08/23 15:11:50 Expand Volume succeeded
>[asynchttp] INFO 2018/08/23 15:11:50 asynchttp.go:292: Completed job 52f8e65b5d8553f56986e2881af459f9 in 19.073330852s
>[negroni] Started GET /queue/52f8e65b5d8553f56986e2881af459f9
>[negroni] Completed 303 See Other in 202.202µs
>[negroni] Started GET /volumes/cfca9c0de6938b06ef4528a12de74201
>[negroni] Completed 200 OK in 7.628131ms
>[negroni] Started DELETE /volumes/cfca9c0de6938b06ef4528a12de74201
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 146.12µs
>[negroni] Completed 202 Accepted in 164.395095ms
>[asynchttp] INFO 2018/08/23 15:11:51 asynchttp.go:288: Started job 837cd68f1e0228c88dbc78e2e0aabbda
>[heketi] INFO 2018/08/23 15:11:51 Started async operation: Delete Volume
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 140.219µs
>[kubeexec] DEBUG 2018/08/23 15:11:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script snapshot list rally-pp0w0lle2msoqd --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 222.219µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 115.966µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 225.285µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 122.503µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 183.04µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 236.736µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 161.558µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 179.821µs
>[kubeexec] DEBUG 2018/08/23 15:12:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script volume stop rally-pp0w0lle2msoqd force
>Result: volume stop: rally-pp0w0lle2msoqd: success
>[kubeexec] DEBUG 2018/08/23 15:12:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script volume delete rally-pp0w0lle2msoqd
>Result: volume delete: rally-pp0w0lle2msoqd: success
>[heketi] INFO 2018/08/23 15:12:01 Deleting brick 78962de2b5e56f6867e21f2608906bd4
>[heketi] INFO 2018/08/23 15:12:01 Deleting brick da0c1db1bf4f2e1d97086d5c353567a3
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 183.383µs
>[kubeexec] DEBUG 2018/08/23 15:12:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: umount /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4
>Result:
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 173.616µs
>[kubeexec] DEBUG 2018/08/23 15:12:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: sed -i.save "/brick_78962de2b5e56f6867e21f2608906bd4/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/08/23 15:12:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: lvremove --autobackup=n -f vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4
>Result: Logical volume "brick_78962de2b5e56f6867e21f2608906bd4" successfully removed
>[kubeexec] DEBUG 2018/08/23 15:12:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: lvs --noheadings --options=thin_count vg_6c0538bc7bed0679f0f595c61c72656f/tp_bf8809b355339e7f39804e80780ec6fc
>Result: 0
>[kubeexec] DEBUG 2018/08/23 15:12:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: lvremove --autobackup=n -f vg_6c0538bc7bed0679f0f595c61c72656f/tp_bf8809b355339e7f39804e80780ec6fc
>Result: Logical volume "tp_bf8809b355339e7f39804e80780ec6fc" successfully removed
>[kubeexec] DEBUG 2018/08/23 15:12:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: rmdir /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4
>Result:
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 195.815µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 188.245µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 203.535µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 204.066µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 188.856µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 196.663µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 197.096µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 201.283µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 188.932µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 130.764µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 210.319µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 186.359µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 196.612µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 190.186µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 127.458µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 138.691µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 151.579µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 191.652µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 144.302µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 251.322µs
>[heketi] INFO 2018/08/23 15:12:21 Starting Node Health Status refresh
>[cmdexec] INFO 2018/08/23 15:12:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 114.001µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 184.236µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 183.155µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 204.102µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 116.212µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 121.186µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 183.716µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 206.519µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 182.325µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 182.539µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 119.02µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 182.212µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 201.355µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 182.382µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 190.313µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 175.863µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 119.669µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 114.167µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 188.819µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 183.103µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 187.719µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 178.139µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 181.723µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 183.836µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 182.959µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 248.499µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 184.645µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 173.956µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 173.032µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 176.923µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 187.139µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 182.496µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 195.723µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 180.996µs
>[negroni] Started DELETE /volumes/7c63529f0b15f298ebf42de88f10bc57
>[negroni] Completed 202 Accepted in 96.809837ms
>[asynchttp] INFO 2018/08/23 15:12:57 asynchttp.go:288: Started job 2a2c007468c1e6efc1856b799642d9d5
>[heketi] INFO 2018/08/23 15:12:57 Started async operation: Delete Volume
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 78.353µs
>[kubeexec] DEBUG 2018/08/23 15:12:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: gluster --mode=script snapshot list vol_7c63529f0b15f298ebf42de88f10bc57 --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_7c63529f0b15f298ebf42de88f10bc57) does not exist</opErrstr>
></cliOutput>
>[heketi] WARNING 2018/08/23 15:12:57 not attempting to delete missing volume vol_7c63529f0b15f298ebf42de88f10bc57
>[heketi] INFO 2018/08/23 15:12:57 Deleting brick 64da05c4fddbb74eef04191bd51aa2e5
>[heketi] INFO 2018/08/23 15:12:57 Deleting brick a4e2a2ac1c2aa4a5940e31335c15646e
>[heketi] INFO 2018/08/23 15:12:57 Deleting brick c862180c0bcd8b0ae95ef4a908b722dd
>[kubeexec] ERROR 2018/08/23 15:12:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [umount /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 32]: Stdout []: Stderr [umount: /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5: mountpoint not found
>]
>[cmdexec] ERROR 2018/08/23 15:12:57 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:198: Unable to execute command on glusterfs-cns-hzqg6: umount: /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5: mountpoint not found
>[kubeexec] ERROR 2018/08/23 15:12:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [umount /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e] on glusterfs-cns-qrfrz: Err[command terminated with exit code 32]: Stdout []: Stderr [umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>]
>[cmdexec] ERROR 2018/08/23 15:12:57 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:198: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[kubeexec] DEBUG 2018/08/23 15:12:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: mount
>Result: overlay on / type overlay (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/U5XO5J2DCZNGUCP5R4MKXSTQNB:/var/lib/docker/overlay2/l/GSPEIWWX6AE2HLHRTHSAXNJ5UV:/var/lib/docker/overlay2/l/XQMHTRWR5Z7RQOSAAZPG5PYFQO:/var/lib/docker/overlay2/l/MBXYP3T6L66XZYLRZW54453HMT,upperdir=/var/lib/docker/overlay2/bd14b4ee8c93cd74e40735e7740631990e59549305b2ad4fbad99c9edabc33e0/diff,workdir=/var/lib/docker/overlay2/bd14b4ee8c93cd74e40735e7740631990e59549305b2ad4fbad99c9edabc33e0/work)
>proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
>sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
>/dev/sdc on /run type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=16378916k,nr_inodes=4094729,mode=755)
>tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
>devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
>hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
>mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
>/dev/mapper/docker--vol-dockerlv on /etc/resolv.conf type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>/dev/sdc on /dev/termination-log type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>/dev/mapper/rhel_dhcp46--210-root on /etc/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /etc/target type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>tmpfs on /run/lvm type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
>shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c371,c614",size=65536k)
>/dev/sdc on /etc/hosts type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>/dev/mapper/rhel_dhcp46--210-root on /etc/ssl type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/docker--vol-dockerlv on /etc/hostname type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>/dev/mapper/docker--vol-dockerlv on /run/secrets type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/glusterd type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/log/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
>cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
>cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
>cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
>cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
>cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
>cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
>cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
>cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
>cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
>cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
>cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
>/dev/mapper/rhel_dhcp46--210-root on /usr/lib/modules type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/misc/glusterfsd type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime,seclabel)
>systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=84403)
>debugfs on /sys/kernel/debug type debugfs (rw,relatime)
>configfs on /sys/kernel/config type configfs (rw,relatime)
>sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>/dev/mapper/vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250 on /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>/dev/mapper/vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa on /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>[cmdexec] WARNING 2018/08/23 15:12:57 brick path [/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5] not mounted, assuming deleted
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 96.306µs
>[kubeexec] DEBUG 2018/08/23 15:12:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mount
>Result: overlay on / type overlay (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/SXPAWETXSHQGNMVRHZJO4FBZVL:/var/lib/docker/overlay2/l/TZBWO4EQPSJF2XM7CWSUWH4MY2:/var/lib/docker/overlay2/l/BQTYMQYGPAKFJ7JDSYFKDB5KRA:/var/lib/docker/overlay2/l/T5YE2NA4OL27CYLFP3AG22I554,upperdir=/var/lib/docker/overlay2/8d56ecec848c4867bd68ea143df91805c04ff4a6fa66c805eff322c933ac00e0/diff,workdir=/var/lib/docker/overlay2/8d56ecec848c4867bd68ea143df91805c04ff4a6fa66c805eff322c933ac00e0/work)
>proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
>sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
>devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=16378916k,nr_inodes=4094729,mode=755)
>tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
>devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
>hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
>mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
>/dev/sdc on /run type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>/dev/mapper/docker--vol-dockerlv on /etc/resolv.conf type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>tmpfs on /run/lvm type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
>/dev/mapper/rhel_dhcp46--210-root on /etc/target type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /etc/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/sdc on /dev/termination-log type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c542,c552",size=65536k)
>/dev/sdc on /etc/hosts type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>/dev/mapper/rhel_dhcp46--210-root on /etc/ssl type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/docker--vol-dockerlv on /etc/hostname type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>/dev/mapper/docker--vol-dockerlv on /run/secrets type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
>cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
>cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
>cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
>cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
>cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
>cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
>cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
>cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
>cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
>cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
>cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/glusterd type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/log/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /usr/lib/modules type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/misc/glusterfsd type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime,seclabel)
>systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=84609)
>configfs on /sys/kernel/config type configfs (rw,relatime)
>debugfs on /sys/kernel/debug type debugfs (rw,relatime)
>sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>/dev/mapper/vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0 on /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>/dev/mapper/vg_557622450e27d9663fd36087fbe35bee-brick_a4e2a2ac1c2aa4a5940e31335c15646e on /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974 on /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_9bb81abeb00ec51c39ad767c63f718f6 on /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_9bb81abeb00ec51c39ad767c63f718f6 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>[kubeexec] DEBUG 2018/08/23 15:12:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: sed -i.save "/brick_64da05c4fddbb74eef04191bd51aa2e5/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 105.054µs
>[heketi] ERROR 2018/08/23 15:12:58 /src/github.com/heketi/heketi/apps/glusterfs/brick_create.go:60: error destroying brick a4e2a2ac1c2aa4a5940e31335c15646e: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[kubeexec] DEBUG 2018/08/23 15:12:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: sed -i.save "/brick_a4e2a2ac1c2aa4a5940e31335c15646e/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 134.953µs
>[cmdexec] WARNING 2018/08/23 15:12:58 did not delete missing lv: vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5
>[kubeexec] ERROR 2018/08/23 15:12:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvremove --autobackup=n -f vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5"
>]
>[cmdexec] WARNING 2018/08/23 15:12:58 unable to count lvs in missing thin pool: vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5
>[kubeexec] ERROR 2018/08/23 15:12:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvs --noheadings --options=thin_count vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5"
>]
>[cmdexec] WARNING 2018/08/23 15:12:59 did not delete missing thin pool: vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5
>[kubeexec] ERROR 2018/08/23 15:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvremove --autobackup=n -f vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_91c1336b9d8010eb5c52368eef886671/tp_64da05c4fddbb74eef04191bd51aa2e5"
>]
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 85.521µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 131.899µs
>[kubeexec] ERROR 2018/08/23 15:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [rmdir /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5] on glusterfs-cns-hzqg6: Err[command terminated with exit code 1]: Stdout []: Stderr [rmdir: failed to remove '/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5': No such file or directory
>]
>[cmdexec] ERROR 2018/08/23 15:12:59 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:279: Unable to execute command on glusterfs-cns-hzqg6: rmdir: failed to remove '/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_64da05c4fddbb74eef04191bd51aa2e5': No such file or directory
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 133.259µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 148.449µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 112.668µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 125.295µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 184.423µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 161.949µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 184.746µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 186.745µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 183.596µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 165.659µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 153.102µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 175.412µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 177.633µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 171.706µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 127.958µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 185.879µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 183.989µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 178.849µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 152.11µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 186.099µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 180.343µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 143.352µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 143.32µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 119.371µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 181.872µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 100.028µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 154.445µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 180.709µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 123.63µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 127.263µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 149.752µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 111.988µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 108.81µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 109.756µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 102.276µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 154.647µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 133.141µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 102.968µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 164.482µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 180.514µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 178.477µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 101.53µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 154.012µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 182.885µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 178.126µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 160.159µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 149.853µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 184.903µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 193.855µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 158.959µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 131.474µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 127.051µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 191.422µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 160.512µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 154.52µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 153.239µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 201.556µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 102.857µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 158.469µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 183.309µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 175.703µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 153.196µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 164.053µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 117.751µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 174.023µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 145.589µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 144.516µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 182.993µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 166.095µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 124.559µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 151.592µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 192.423µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 185.586µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 164.052µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 100.004µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 115.826µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 164.049µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 180.813µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 242.659µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 199.862µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 198.639µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 113.623µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 160.209µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 200.043µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 181.542µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 183.989µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 153.119µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 182.579µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 181.716µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 170.309µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 158.413µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 201.349µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 186.396µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 162.759µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 161.225µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 206.399µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 140.027µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 164.525µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 151.759µs
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 200 OK in 185.652µs
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 194.662µs
>[kubeexec] ERROR 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume rebalance rally-h6rrrujhjr1bp5 start] on glusterfs-cns-jlgq2: Err[command terminated with exit code 1]: Stdout [Error : Request timed out
>]: Stderr []
>[cmdexec] ERROR 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:124: Unable to start rebalance on the volume &{[{/var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_1fe6af53195f6ab434b249a6147c6c00/brick 10.70.46.26}] rally-h6rrrujhjr1bp5 0 [] 0 0 1 false}: Unable to execute command on glusterfs-cns-jlgq2:
>[cmdexec] ERROR 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:125: Action Required: run rebalance manually on the volume &{[{/var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_1fe6af53195f6ab434b249a6147c6c00/brick 10.70.46.26}] rally-h6rrrujhjr1bp5 0 [] 0 0 1 false}
>[heketi] INFO 2018/08/23 15:13:50 Expand Volume succeeded
>[asynchttp] INFO 2018/08/23 15:13:50 asynchttp.go:292: Completed job 3a9a33dc2b409f05142def1f38ac4401 in 2m19.200280993s
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 106.873µs
>[kubeexec] DEBUG 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: umount /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3
>Result:
>[kubeexec] DEBUG 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 1 day 5h ago
> Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 430 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service
> ├─ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 6116 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-pp0w0lle2msoqd.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3-brick -p /var/run/gluster/vols/rally-pp0w0lle2msoqd/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3-brick.pid -S /var/run/gluster/1cfd7f9c5cce7da28a62fa0b83f0bfdf.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-pp0w0lle2msoqd-server.listen-port=49154
> ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option *replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f
>[heketi] INFO 2018/08/23 15:13:50 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true
>[cmdexec] INFO 2018/08/23 15:13:50 Check Glusterd service status in node vp-ansible-v311-app-cns-1
>[kubeexec] DEBUG 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 1 day 5h ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 5775 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-pp0w0lle2msoqd.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4-brick -p /var/run/gluster/vols/rally-pp0w0lle2msoqd/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4-brick.pid -S /var/run/gluster/68e8e64015e7e6c91d0690c657659cbc.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-pp0w0lle2msoqd-server.listen-port=49154
> ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223
>[heketi] INFO 2018/08/23 15:13:50 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true
>[cmdexec] INFO 2018/08/23 15:13:50 Check Glusterd service status in node vp-ansible-v311-app-cns-2
>[kubeexec] ERROR 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [umount /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 32]: Stdout []: Stderr [umount: /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd: mountpoint not found
>]
>[cmdexec] ERROR 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:198: Unable to execute command on glusterfs-cns-jlgq2: umount: /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd: mountpoint not found
>[kubeexec] DEBUG 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 1 day 5h ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d
>[heketi] INFO 2018/08/23 15:13:50 Periodic health check status: node f48c1fefdee6989a05560260afcd0a2d up=true
>[heketi] INFO 2018/08/23 15:13:50 Cleaned 0 nodes from health cache
>[kubeexec] DEBUG 2018/08/23 15:13:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: sed -i.save "/brick_da0c1db1bf4f2e1d97086d5c353567a3/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/08/23 15:13:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mount
>Result: overlay on / type overlay (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/OFUIYSJFBAYVM6K7KF5CGMCW3V:/var/lib/docker/overlay2/l/ITGEMGBNMOKYHHLYP2GVUYA5WO:/var/lib/docker/overlay2/l/LIUQEN2SQMRD6C2F2SVINNX6XJ:/var/lib/docker/overlay2/l/GIONCUIDKEIFJRKX7TAKGOKFUH,upperdir=/var/lib/docker/overlay2/a3dd718e2080efe6c1e8d7e459eeced151ca909d3e63720a60b68e24345428cc/diff,workdir=/var/lib/docker/overlay2/a3dd718e2080efe6c1e8d7e459eeced151ca909d3e63720a60b68e24345428cc/work)
>proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
>sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
>/dev/sdc on /run type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=16378916k,nr_inodes=4094729,mode=755)
>tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
>devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
>mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
>hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
>/dev/mapper/docker--vol-dockerlv on /etc/resolv.conf type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>/dev/mapper/rhel_dhcp46--210-root on /etc/target type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /etc/ssl type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
>/dev/sdc on /dev/termination-log type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>/dev/mapper/rhel_dhcp46--210-root on /etc/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c304,c449",size=65536k)
>/dev/sdc on /etc/hosts type xfs (rw,relatime,seclabel,attr2,inode64,grpquota)
>tmpfs on /run/lvm type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
>/dev/mapper/docker--vol-dockerlv on /etc/hostname type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>/dev/mapper/docker--vol-dockerlv on /run/secrets type xfs (rw,relatime,seclabel,attr2,inode64,prjquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/glusterd type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
>cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
>cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
>cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
>cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
>cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
>cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
>cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
>cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
>cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
>cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
>cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
>/dev/mapper/rhel_dhcp46--210-root on /var/log/glusterfs type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /usr/lib/modules type xfs (ro,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>/dev/mapper/rhel_dhcp46--210-root on /var/lib/misc/glusterfsd type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
>tmpfs on /run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime,seclabel)
>systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=23,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=87226)
>debugfs on /sys/kernel/debug type debugfs (rw,relatime)
>configfs on /sys/kernel/config type configfs (rw,relatime)
>sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>/dev/mapper/vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e on /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b on /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_1fe6af53195f6ab434b249a6147c6c00 on /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_1fe6af53195f6ab434b249a6147c6c00 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>[cmdexec] WARNING 2018/08/23 15:13:51 brick path [/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd] not mounted, assuming deleted
>[negroni] Started GET /blockvolumes
>[negroni] Completed 200 OK in 4.161509ms
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 3.193647ms
>[negroni] Started GET /volumes/7c63529f0b15f298ebf42de88f10bc57
>[negroni] Completed 200 OK in 3.223002ms
>[negroni] Started GET /volumes/88eb861a7ad3f268c2d092be1287cef6
>[negroni] Completed 200 OK in 596.635µs
>[negroni] Started GET /volumes/a833a9314f4557589a9d874105357140
>[negroni] Completed 200 OK in 1.273207ms
>[negroni] Started GET /volumes/cfca9c0de6938b06ef4528a12de74201
>[negroni] Completed 200 OK in 557.342µs
>[negroni] Started GET /volumes/f7e8d452a31fe300d75499749fed9a2b
>[negroni] Completed 200 OK in 1.326322ms
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 149.4µs
>[kubeexec] DEBUG 2018/08/23 15:13:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: lvremove --autobackup=n -f vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3
>Result: Logical volume "brick_da0c1db1bf4f2e1d97086d5c353567a3" successfully removed
>[negroni] Started GET /queue/3a9a33dc2b409f05142def1f38ac4401
>[negroni] Completed 303 See Other in 184.522µs
>[negroni] Started GET /volumes/f7e8d452a31fe300d75499749fed9a2b
>[negroni] Completed 200 OK in 924.259µs
>[kubeexec] DEBUG 2018/08/23 15:13:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: sed -i.save "/brick_c862180c0bcd8b0ae95ef4a908b722dd/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started DELETE /volumes/f7e8d452a31fe300d75499749fed9a2b
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 192.646µs
>[negroni] Completed 202 Accepted in 154.24069ms
>[asynchttp] INFO 2018/08/23 15:13:52 asynchttp.go:288: Started job 75b58fe5e1a2e0bcc364e43734bae877
>[heketi] INFO 2018/08/23 15:13:52 Started async operation: Delete Volume
>[negroni] Started GET /queue/75b58fe5e1a2e0bcc364e43734bae877
>[negroni] Completed 200 OK in 170.789µs
>[kubeexec] DEBUG 2018/08/23 15:13:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: lvs --noheadings --options=thin_count vg_80280148c66c9e91e0c27f64a751900f/tp_cca9b20cf7150b9d6c6ffb0cf21ed67a
>Result: 0
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 81.261µs
>[cmdexec] WARNING 2018/08/23 15:13:52 did not delete missing lv: vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd
>[kubeexec] ERROR 2018/08/23 15:13:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvremove --autobackup=n -f vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd"
>]
>[kubeexec] DEBUG 2018/08/23 15:13:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: gluster --mode=script snapshot list rally-h6rrrujhjr1bp5 --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 126.063µs
>[kubeexec] DEBUG 2018/08/23 15:13:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: lvremove --autobackup=n -f vg_80280148c66c9e91e0c27f64a751900f/tp_cca9b20cf7150b9d6c6ffb0cf21ed67a
>Result: Logical volume "tp_cca9b20cf7150b9d6c6ffb0cf21ed67a" successfully removed
>[cmdexec] WARNING 2018/08/23 15:13:54 unable to count lvs in missing thin pool: vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd
>[kubeexec] ERROR 2018/08/23 15:13:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvs --noheadings --options=thin_count vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd"
>]
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 200 OK in 203.603µs
>[negroni] Started GET /queue/75b58fe5e1a2e0bcc364e43734bae877
>[negroni] Completed 200 OK in 182.639µs
>[kubeexec] ERROR 2018/08/23 15:13:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop rally-h6rrrujhjr1bp5 force] on glusterfs-cns-jlgq2: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: rally-h6rrrujhjr1bp5: failed: Another transaction is in progress for rally-h6rrrujhjr1bp5. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/08/23 15:13:54 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:145: Unable to stop volume rally-h6rrrujhjr1bp5: Unable to execute command on glusterfs-cns-jlgq2: volume stop: rally-h6rrrujhjr1bp5: failed: Another transaction is in progress for rally-h6rrrujhjr1bp5. Please try again after sometime.
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 176.306µs
>[kubeexec] DEBUG 2018/08/23 15:13:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: rmdir /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3
>Result:
>[heketi] INFO 2018/08/23 15:13:54 Delete Volume succeeded
>[asynchttp] INFO 2018/08/23 15:13:54 asynchttp.go:292: Completed job 837cd68f1e0228c88dbc78e2e0aabbda in 2m3.380774329s
>[negroni] Started GET /blockvolumes
>[negroni] Completed 200 OK in 2.64848ms
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 3.47188ms
>[negroni] Started GET /volumes/7c63529f0b15f298ebf42de88f10bc57
>[negroni] Completed 200 OK in 4.377534ms
>[negroni] Started GET /volumes/88eb861a7ad3f268c2d092be1287cef6
>[negroni] Completed 200 OK in 579.534µs
>[negroni] Started GET /volumes/a833a9314f4557589a9d874105357140
>[negroni] Completed 200 OK in 1.173919ms
>[negroni] Started GET /volumes/f7e8d452a31fe300d75499749fed9a2b
>[negroni] Completed 200 OK in 564.981µs
>[cmdexec] WARNING 2018/08/23 15:13:55 did not delete missing thin pool: vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd
>[kubeexec] ERROR 2018/08/23 15:13:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [lvremove --autobackup=n -f vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 5]: Stdout []: Stderr [ Failed to find logical volume "vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_c862180c0bcd8b0ae95ef4a908b722dd"
>]
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 200 OK in 259.516µs
>[kubeexec] ERROR 2018/08/23 15:13:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete rally-h6rrrujhjr1bp5] on glusterfs-cns-jlgq2: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: rally-h6rrrujhjr1bp5: failed: Another transaction is in progress for rally-h6rrrujhjr1bp5. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/08/23 15:13:55 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:154: Unable to delete volume rally-h6rrrujhjr1bp5: Unable to execute command on glusterfs-cns-jlgq2: volume delete: rally-h6rrrujhjr1bp5: failed: Another transaction is in progress for rally-h6rrrujhjr1bp5. Please try again after sometime.
>[heketi] ERROR 2018/08/23 15:13:55 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:673: Unable to delete volume: Unable to delete volume rally-h6rrrujhjr1bp5: Unable to execute command on glusterfs-cns-jlgq2: volume delete: rally-h6rrrujhjr1bp5: failed: Another transaction is in progress for rally-h6rrrujhjr1bp5. Please try again after sometime.
>[heketi] ERROR 2018/08/23 15:13:55 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:433: Error executing delete volume: Unable to delete volume rally-h6rrrujhjr1bp5: Unable to execute command on glusterfs-cns-jlgq2: volume delete: rally-h6rrrujhjr1bp5: failed: Another transaction is in progress for rally-h6rrrujhjr1bp5. Please try again after sometime.
>[asynchttp] INFO 2018/08/23 15:13:55 asynchttp.go:292: Completed job 75b58fe5e1a2e0bcc364e43734bae877 in 3.600722418s
>[heketi] ERROR 2018/08/23 15:13:55 /src/github.com/heketi/heketi/apps/glusterfs/operations_manage.go:113: Delete Volume Failed: Unable to delete volume rally-h6rrrujhjr1bp5: Unable to execute command on glusterfs-cns-jlgq2: volume delete: rally-h6rrrujhjr1bp5: failed: Another transaction is in progress for rally-h6rrrujhjr1bp5. Please try again after sometime.
>[kubeexec] ERROR 2018/08/23 15:13:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [rmdir /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd] on glusterfs-cns-jlgq2: Err[command terminated with exit code 1]: Stdout []: Stderr [rmdir: failed to remove '/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd': No such file or directory
>]
>[cmdexec] ERROR 2018/08/23 15:13:56 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:279: Unable to execute command on glusterfs-cns-jlgq2: rmdir: failed to remove '/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_c862180c0bcd8b0ae95ef4a908b722dd': No such file or directory
>[heketi] ERROR 2018/08/23 15:13:56 /src/github.com/heketi/heketi/apps/glusterfs/brick_create.go:77: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[heketi] ERROR 2018/08/23 15:13:56 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:683: Unable to delete bricks: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[heketi] ERROR 2018/08/23 15:13:56 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:433: Error executing delete volume: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[negroni] Started GET /queue/837cd68f1e0228c88dbc78e2e0aabbda
>[negroni] Completed 204 No Content in 172.189µs
>[asynchttp] INFO 2018/08/23 15:13:56 asynchttp.go:292: Completed job 2a2c007468c1e6efc1856b799642d9d5 in 58.699579574s
>[heketi] ERROR 2018/08/23 15:13:56 /src/github.com/heketi/heketi/apps/glusterfs/operations_manage.go:113: Delete Volume Failed: Unable to execute command on glusterfs-cns-qrfrz: umount: /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_a4e2a2ac1c2aa4a5940e31335c15646e: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[negroni] Started GET /queue/75b58fe5e1a2e0bcc364e43734bae877
>[negroni] Completed 500 Internal Server Error in 111.95µs
>[negroni] Started GET /queue/2a2c007468c1e6efc1856b799642d9d5
>[negroni] Completed 500 Internal Server Error in 148.472µs
>[negroni] Started GET /blockvolumes
>[negroni] Completed 200 OK in 2.954022ms
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 2.898725ms
>[negroni] Started GET /volumes/7c63529f0b15f298ebf42de88f10bc57
>[negroni] Completed 200 OK in 2.711631ms
>[negroni] Started GET /volumes/88eb861a7ad3f268c2d092be1287cef6
>[negroni] Completed 200 OK in 562.355µs
>[negroni] Started GET /volumes/a833a9314f4557589a9d874105357140
>[negroni] Completed 200 OK in 1.185331ms
>[negroni] Started GET /volumes/f7e8d452a31fe300d75499749fed9a2b
>[negroni] Completed 200 OK in 492.662µs
>[negroni] Started GET /blockvolumes
>[negroni] Completed 200 OK in 225.055µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 151.907µs
>[negroni] Started GET /volumes/7c63529f0b15f298ebf42de88f10bc57
>[negroni] Completed 200 OK in 663.494µs
>[negroni] Started GET /volumes/88eb861a7ad3f268c2d092be1287cef6
>[negroni] Completed 200 OK in 643.872µs
>[negroni] Started GET /volumes/a833a9314f4557589a9d874105357140
>[negroni] Completed 200 OK in 547.955µs
>[negroni] Started GET /volumes/f7e8d452a31fe300d75499749fed9a2b
>[negroni] Completed 200 OK in 453.449µs
>[heketi] INFO 2018/08/23 15:14:21 Starting Node Health Status refresh
>[cmdexec] INFO 2018/08/23 15:14:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0
>[kubeexec] DEBUG 2018/08/23 15:14:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 1 day 5h ago
> Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 430 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service
> ├─ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 6116 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-pp0w0lle2msoqd.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3-brick -p /var/run/gluster/vols/rally-pp0w0lle2msoqd/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3-brick.pid -S /var/run/gluster/1cfd7f9c5cce7da28a62fa0b83f0bfdf.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_da0c1db1bf4f2e1d97086d5c353567a3/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_da0c1db1bf4f2e1d97086d5c353567a3-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-pp0w0lle2msoqd-server.listen-port=49154
> ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option *replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f
>[heketi] INFO 2018/08/23 15:14:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true
>[cmdexec] INFO 2018/08/23 15:14:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1
>[kubeexec] DEBUG 2018/08/23 15:14:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 1 day 5h ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 5775 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-pp0w0lle2msoqd.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4-brick -p /var/run/gluster/vols/rally-pp0w0lle2msoqd/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4-brick.pid -S /var/run/gluster/68e8e64015e7e6c91d0690c657659cbc.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_78962de2b5e56f6867e21f2608906bd4/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_78962de2b5e56f6867e21f2608906bd4-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-pp0w0lle2msoqd-server.listen-port=49154
> ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223
>[heketi] INFO 2018/08/23 15:14:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true
>[cmdexec] INFO 2018/08/23 15:14:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2
>[kubeexec] DEBUG 2018/08/23 15:14:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 1 day 5h ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d
>[heketi] INFO 2018/08/23 15:14:22 Periodic health check status: node f48c1fefdee6989a05560260afcd0a2d up=true
>[heketi] INFO 2018/08/23 15:14:22 Cleaned 0 nodes from health cache