Red Hat Bugzilla – Attachment 1449068 Details for Bug 1589070 – [Tracker-RHGS-BZ#1631329-BZ#1524336-BZ#1618221] Difference in volume count in heketi and gluster volume list
heketi.log (text/plain), 2.53 MB, created by Apeksha on 2018-06-08 10:25:49 UTC
Description: attaching heketi.log
Filename: heketi.log
MIME Type: text/plain
Creator: Apeksha
Created: 2018-06-08 10:25:49 UTC
Size: 2.53 MB
>Heketi 7.0.0 >[heketi] INFO 2018/06/08 07:37:14 Loaded kubernetes executor >[heketi] INFO 2018/06/08 07:37:14 Block: Auto Create Block Hosting Volume set to true >[heketi] INFO 2018/06/08 07:37:14 Block: New Block Hosting Volume size 100 GB >[heketi] INFO 2018/06/08 07:37:14 GlusterFS Application Loaded >[heketi] INFO 2018/06/08 07:37:14 Started Node Health Cache Monitor >Authorization loaded >Listening on port 8080 >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 07:37:20 Allocating brick set #0 >[negroni] Completed 202 Accepted in 21.131527ms >[asynchttp] INFO 2018/06/08 07:37:20 asynchttp.go:288: Started job 0a3a0f4fc1906a402387b85f96b23c96 >[heketi] INFO 2018/06/08 07:37:20 Started async operation: Create Volume >[negroni] Started GET /queue/0a3a0f4fc1906a402387b85f96b23c96 >[negroni] Completed 200 OK in 118.624µs >[heketi] INFO 2018/06/08 07:37:20 Creating brick b9d88edeed17a4ff0395a5b36b4cd7e0 >[heketi] INFO 2018/06/08 07:37:20 Creating brick 4c6c93fd3801e871481fcd4fbf9b8ba8 >[heketi] INFO 2018/06/08 07:37:20 Creating brick 4481cc39f08c0bdcff489aab80756871 >[kubeexec] DEBUG 2018/06/08 07:37:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871 >Result: >[kubeexec] DEBUG 2018/06/08 07:37:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0 >Result: >[kubeexec] DEBUG 2018/06/08 07:37:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8 >Result: >[kubeexec] 
DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_d389f0278a774bd7443a09af960961d8/tp_4481cc39f08c0bdcff489aab80756871 --virtualsize 10485760K --name brick_4481cc39f08c0bdcff489aab80756871 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_4481cc39f08c0bdcff489aab80756871" created. >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_b9d88edeed17a4ff0395a5b36b4cd7e0 --virtualsize 10485760K --name brick_b9d88edeed17a4ff0395a5b36b4cd7e0 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_b9d88edeed17a4ff0395a5b36b4cd7e0" created. >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_3a4297677881963e3f80124971d50eea/tp_4c6c93fd3801e871481fcd4fbf9b8ba8 --virtualsize 10485760K --name brick_4c6c93fd3801e871481fcd4fbf9b8ba8 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_4c6c93fd3801e871481fcd4fbf9b8ba8" created. 
>[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4481cc39f08c0bdcff489aab80756871 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4481cc39f08c0bdcff489aab80756871 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b9d88edeed17a4ff0395a5b36b4cd7e0 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b9d88edeed17a4ff0395a5b36b4cd7e0 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c6c93fd3801e871481fcd4fbf9b8ba8 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c6c93fd3801e871481fcd4fbf9b8ba8 isize=512 agcount=16, 
agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4481cc39f08c0bdcff489aab80756871 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b9d88edeed17a4ff0395a5b36b4cd7e0 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c6c93fd3801e871481fcd4fbf9b8ba8 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/0a3a0f4fc1906a402387b85f96b23c96 >[negroni] Completed 200 OK in 133.427µs >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4481cc39f08c0bdcff489aab80756871 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871 >Result: >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b9d88edeed17a4ff0395a5b36b4cd7e0 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0 >Result: >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c6c93fd3801e871481fcd4fbf9b8ba8 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8 >Result: >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8/brick >Result: >[kubeexec] 
DEBUG 2018/06/08 07:37:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:37:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:37:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:37:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:37:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:37:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8/brick >Result: >[cmdexec] INFO 2018/06/08 07:37:22 Creating volume vol_bcb87c9524580e28814de6f7dc194288 replica 3 >[kubeexec] DEBUG 2018/06/08 07:37:22 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_bcb87c9524580e28814de6f7dc194288 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8/brick >Result: volume create: vol_bcb87c9524580e28814de6f7dc194288: success: please start the volume to access data >[negroni] Started GET /queue/0a3a0f4fc1906a402387b85f96b23c96 >[negroni] Completed 200 OK in 108.708µs >[negroni] Started GET /queue/0a3a0f4fc1906a402387b85f96b23c96 >[negroni] Completed 200 OK in 187.521µs >[heketi] INFO 2018/06/08 07:37:24 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 07:37:24 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 13min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1228 
/usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 07:37:24 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 07:37:24 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 13min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com 
systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:37:24 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 07:37:24 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_bcb87c9524580e28814de6f7dc194288 >Result: volume start: vol_bcb87c9524580e28814de6f7dc194288: success >[heketi] INFO 2018/06/08 07:37:24 Create Volume succeeded >[asynchttp] INFO 2018/06/08 07:37:24 asynchttp.go:292: Completed job 0a3a0f4fc1906a402387b85f96b23c96 in 3.889038852s >[negroni] Started GET /queue/0a3a0f4fc1906a402387b85f96b23c96 >[negroni] Completed 303 See Other in 135.871µs >[negroni] Started GET /volumes/bcb87c9524580e28814de6f7dc194288 >[negroni] Completed 200 OK in 10.076402ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 220.398µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.039466ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 625.893µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 649.202µs >[kubeexec] DEBUG 2018/06/08 07:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 12min ago > Process: 828 
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ3014 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > ââ3015 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... 
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:37:24 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 07:37:24 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/bcb87c9524580e28814de6f7dc194288 >[negroni] Completed 202 Accepted in 28.149131ms >[asynchttp] INFO 2018/06/08 07:37:50 asynchttp.go:288: Started job f71c2513ed46d147319415ada119329a >[heketi] INFO 2018/06/08 07:37:50 Started async operation: Delete Volume >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 135.257µs >[kubeexec] DEBUG 2018/06/08 07:37:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_bcb87c9524580e28814de6f7dc194288 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 204.058µs >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 119.564µs >[kubeexec] DEBUG 2018/06/08 07:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_bcb87c9524580e28814de6f7dc194288 force >Result: volume stop: vol_bcb87c9524580e28814de6f7dc194288: success >[kubeexec] DEBUG 2018/06/08 07:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_bcb87c9524580e28814de6f7dc194288 >Result: volume delete: vol_bcb87c9524580e28814de6f7dc194288: 
success >[heketi] INFO 2018/06/08 07:37:52 Deleting brick 4481cc39f08c0bdcff489aab80756871 >[heketi] INFO 2018/06/08 07:37:52 Deleting brick 4c6c93fd3801e871481fcd4fbf9b8ba8 >[heketi] INFO 2018/06/08 07:37:52 Deleting brick b9d88edeed17a4ff0395a5b36b4cd7e0 >[kubeexec] DEBUG 2018/06/08 07:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c6c93fd3801e871481fcd4fbf9b8ba8 >[kubeexec] DEBUG 2018/06/08 07:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b9d88edeed17a4ff0395a5b36b4cd7e0 >[kubeexec] DEBUG 2018/06/08 07:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871 | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4481cc39f08c0bdcff489aab80756871 >[kubeexec] DEBUG 2018/06/08 07:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b9d88edeed17a4ff0395a5b36b4cd7e0 > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_b9d88edeed17a4ff0395a5b36b4cd7e0 >[kubeexec] DEBUG 2018/06/08 07:37:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c6c93fd3801e871481fcd4fbf9b8ba8 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_4c6c93fd3801e871481fcd4fbf9b8ba8 >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 94.934µs >[kubeexec] DEBUG 2018/06/08 07:37:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4481cc39f08c0bdcff489aab80756871 > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_4481cc39f08c0bdcff489aab80756871 >[kubeexec] DEBUG 2018/06/08 07:37:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0 >Result: >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 102.101µs >[kubeexec] DEBUG 2018/06/08 07:37:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8 >Result: >[kubeexec] DEBUG 2018/06/08 07:37:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871 >Result: >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 121.371µs >[kubeexec] DEBUG 2018/06/08 07:37:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_b9d88edeed17a4ff0395a5b36b4cd7e0/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 07:37:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_4c6c93fd3801e871481fcd4fbf9b8ba8/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 121.211µs >[kubeexec] DEBUG 2018/06/08 07:37:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_4481cc39f08c0bdcff489aab80756871/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 07:37:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b9d88edeed17a4ff0395a5b36b4cd7e0 > >Result: Logical volume "brick_b9d88edeed17a4ff0395a5b36b4cd7e0" successfully removed >[kubeexec] DEBUG 2018/06/08 07:37:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c6c93fd3801e871481fcd4fbf9b8ba8 > >Result: Logical volume "brick_4c6c93fd3801e871481fcd4fbf9b8ba8" successfully removed >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 125.686µs >[kubeexec] DEBUG 2018/06/08 07:37:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4481cc39f08c0bdcff489aab80756871 > >Result: Logical volume 
"brick_4481cc39f08c0bdcff489aab80756871" successfully removed >[kubeexec] DEBUG 2018/06/08 07:37:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_b9d88edeed17a4ff0395a5b36b4cd7e0 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 07:37:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_4c6c93fd3801e871481fcd4fbf9b8ba8 > >Result: 0 >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 118.538µs >[kubeexec] DEBUG 2018/06/08 07:37:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_4481cc39f08c0bdcff489aab80756871 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 07:37:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_b9d88edeed17a4ff0395a5b36b4cd7e0 > >Result: Logical volume "tp_b9d88edeed17a4ff0395a5b36b4cd7e0" successfully removed >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 161.509µs >[kubeexec] DEBUG 2018/06/08 07:37:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_4c6c93fd3801e871481fcd4fbf9b8ba8 > >Result: Logical volume "tp_4c6c93fd3801e871481fcd4fbf9b8ba8" successfully removed >[kubeexec] DEBUG 2018/06/08 07:37:59 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_4481cc39f08c0bdcff489aab80756871 > >Result: Logical volume "tp_4481cc39f08c0bdcff489aab80756871" successfully removed >[kubeexec] DEBUG 2018/06/08 07:38:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b9d88edeed17a4ff0395a5b36b4cd7e0 >Result: >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 200 OK in 118.425µs >[kubeexec] DEBUG 2018/06/08 07:38:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c6c93fd3801e871481fcd4fbf9b8ba8 >Result: >[kubeexec] DEBUG 2018/06/08 07:38:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4481cc39f08c0bdcff489aab80756871 >Result: >[heketi] INFO 2018/06/08 07:38:00 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 07:38:00 asynchttp.go:292: Completed job f71c2513ed46d147319415ada119329a in 10.566469437s >[negroni] Started GET /queue/f71c2513ed46d147319415ada119329a >[negroni] Completed 204 No Content in 162.342µs >[heketi] INFO 2018/06/08 07:39:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 07:39:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:39:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd 
>Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 15min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5639 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. 
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:39:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 07:39:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:39:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 15min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l 
/var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5506 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:39:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 07:39:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:39:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 13min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id 
heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─3079 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 07:39:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 07:39:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 07:41:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 07:41:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 17min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option 
*-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5639 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:41:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 07:41:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 17min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 
/usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5506 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 07:41:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 07:41:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 15min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─3079 /usr/sbin/glusterfs -s localhost --volfile-id 
gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:41:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 07:41:14 Cleaned 0 nodes from health cache >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 07:41:33 Allocating brick set #0 >[negroni] Completed 202 Accepted in 13.669491ms >[asynchttp] INFO 2018/06/08 07:41:33 asynchttp.go:288: Started job acac9d92f0d66231baa05a1d7ba787b0 >[heketi] INFO 2018/06/08 07:41:33 Started async operation: Create Volume >[negroni] Started GET /queue/acac9d92f0d66231baa05a1d7ba787b0 >[negroni] Completed 200 OK in 161.499µs >[heketi] INFO 2018/06/08 07:41:33 Creating brick e3b5c51759943388b0fc47d575ed5445 >[heketi] INFO 2018/06/08 07:41:33 Creating brick 8ae5b886d80baf6275776a0a1bf731ad >[heketi] INFO 2018/06/08 07:41:33 Creating brick 08277a41c0781a41f0ec4d102be47ab8 >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445 >Result: >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8 >Result: >[kubeexec] DEBUG 2018/06/08 07:41:33 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad >Result: >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_3a4297677881963e3f80124971d50eea/tp_8ae5b886d80baf6275776a0a1bf731ad --virtualsize 10485760K --name brick_8ae5b886d80baf6275776a0a1bf731ad >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_8ae5b886d80baf6275776a0a1bf731ad" created. >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_e3b5c51759943388b0fc47d575ed5445 --virtualsize 10485760K --name brick_e3b5c51759943388b0fc47d575ed5445 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e3b5c51759943388b0fc47d575ed5445" created. >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_08277a41c0781a41f0ec4d102be47ab8 --virtualsize 10485760K --name brick_08277a41c0781a41f0ec4d102be47ab8 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_08277a41c0781a41f0ec4d102be47ab8" created. 
>[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8ae5b886d80baf6275776a0a1bf731ad >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8ae5b886d80baf6275776a0a1bf731ad isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e3b5c51759943388b0fc47d575ed5445 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e3b5c51759943388b0fc47d575ed5445 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_08277a41c0781a41f0ec4d102be47ab8 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_08277a41c0781a41f0ec4d102be47ab8 isize=512 agcount=16, 
agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8ae5b886d80baf6275776a0a1bf731ad /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e3b5c51759943388b0fc47d575ed5445 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_08277a41c0781a41f0ec4d102be47ab8 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8ae5b886d80baf6275776a0a1bf731ad /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad >Result: >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e3b5c51759943388b0fc47d575ed5445 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445 >Result: >[kubeexec] DEBUG 2018/06/08 07:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_08277a41c0781a41f0ec4d102be47ab8 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8 >Result: >[negroni] Started GET /queue/acac9d92f0d66231baa05a1d7ba787b0 >[negroni] Completed 200 OK in 114.777µs >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445/brick >Result: >[cmdexec] INFO 2018/06/08 07:41:34 Creating 
volume cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a replica 3 >[kubeexec] DEBUG 2018/06/08 07:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8/brick >Result: volume create: cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a: success: please start the volume to access data >[negroni] Started GET /queue/acac9d92f0d66231baa05a1d7ba787b0 >[negroni] Completed 200 OK in 137.655µs >[negroni] Started GET /queue/acac9d92f0d66231baa05a1d7ba787b0 >[negroni] Completed 200 OK in 127.992µs >[negroni] Started GET /queue/acac9d92f0d66231baa05a1d7ba787b0 >[negroni] Completed 200 OK in 133.277µs >[negroni] Started GET /queue/acac9d92f0d66231baa05a1d7ba787b0 >[negroni] Completed 200 OK in 149.179µs >[kubeexec] DEBUG 2018/06/08 07:41:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a >Result: volume start: cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a: success >[heketi] INFO 2018/06/08 07:41:38 Create Volume succeeded >[asynchttp] INFO 2018/06/08 07:41:38 asynchttp.go:292: Completed job acac9d92f0d66231baa05a1d7ba787b0 in 5.804225171s >[negroni] Started GET /queue/acac9d92f0d66231baa05a1d7ba787b0 >[negroni] Completed 303 See Other in 276.781µs >[negroni] Started GET 
/volumes/24485cd4cac31cedd877a92a75f2397f >[negroni] Completed 200 OK in 2.268844ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 332.308µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 973.725µs >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 723.558µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 593.338µs >[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 10.044724ms >[asynchttp] INFO 2018/06/08 07:42:20 asynchttp.go:288: Started job c62f358eeb3193ad12ba712fd51ac30c >[heketi] INFO 2018/06/08 07:42:20 Started async operation: Delete Volume >[negroni] Started GET /queue/c62f358eeb3193ad12ba712fd51ac30c >[negroni] Completed 200 OK in 151.979µs >[kubeexec] DEBUG 2018/06/08 07:42:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 07:42:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 07:42:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on 
glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[kubeexec] ERROR 2018/06/08 07:42:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>]
>[cmdexec] ERROR 2018/06/08 07:42:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:42:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:42:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:42:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[asynchttp] INFO 2018/06/08 07:42:21 asynchttp.go:292: Completed job c62f358eeb3193ad12ba712fd51ac30c in 721.195243ms
>[negroni] Started GET /queue/c62f358eeb3193ad12ba712fd51ac30c
>[negroni] Completed 500 Internal Server Error in 138.538µs
>[negroni] Started GET /volumes
>[negroni] Completed 401 Unauthorized in 185.442µs
>[heketi] INFO 2018/06/08 07:43:14 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 07:43:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 19min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─5986 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034
>
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:43:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true
>[cmdexec] INFO 2018/06/08 07:43:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 19min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─5802 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0
>
>Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:43:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true
>[cmdexec] INFO 2018/06/08 07:43:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 17min ago
> Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 830 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service
> ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─3374 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
>
>Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:43:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 07:43:14 Cleaned 0 nodes from health cache
>[negroni] Started POST /volumes
>[heketi] INFO 2018/06/08 07:44:39 Allocating brick set #0
>[negroni] Completed 202 Accepted in 18.037248ms
>[asynchttp] INFO 2018/06/08 07:44:39 asynchttp.go:288: Started job 79b4dee22f486972d3aef2af0be6cc0b
>[heketi] INFO 2018/06/08 07:44:39 Started async operation: Create Volume
>[negroni] Started GET /queue/79b4dee22f486972d3aef2af0be6cc0b
>[negroni] Completed 200 OK in 162.93µs
>[heketi] INFO 2018/06/08 07:44:39 Creating brick 8f853bf979222c0f3eec071f7605f2c2
>[heketi] INFO 2018/06/08 07:44:39 Creating brick 6fe37e54fbf6c9bb1e66e24c971d7a76
>[heketi] INFO 2018/06/08 07:44:39 Creating brick fa4c6b50ba3c59db3faff8234b61e970
>[kubeexec] DEBUG 2018/06/08 07:44:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2
>Result:
>[kubeexec] DEBUG 2018/06/08 07:44:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970
>Result:
>[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76
>Result:
>[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_3a4297677881963e3f80124971d50eea/tp_fa4c6b50ba3c59db3faff8234b61e970 --virtualsize 10485760K --name brick_fa4c6b50ba3c59db3faff8234b61e970
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_fa4c6b50ba3c59db3faff8234b61e970" created.
>[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_d389f0278a774bd7443a09af960961d8/tp_8f853bf979222c0f3eec071f7605f2c2 --virtualsize 10485760K --name brick_8f853bf979222c0f3eec071f7605f2c2
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_8f853bf979222c0f3eec071f7605f2c2" created.
>[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_6fe37e54fbf6c9bb1e66e24c971d7a76 --virtualsize 10485760K --name brick_6fe37e54fbf6c9bb1e66e24c971d7a76 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_6fe37e54fbf6c9bb1e66e24c971d7a76" created. >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_8f853bf979222c0f3eec071f7605f2c2 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_8f853bf979222c0f3eec071f7605f2c2 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_6fe37e54fbf6c9bb1e66e24c971d7a76 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_6fe37e54fbf6c9bb1e66e24c971d7a76 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log 
=internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_8f853bf979222c0f3eec071f7605f2c2 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_6fe37e54fbf6c9bb1e66e24c971d7a76 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_8f853bf979222c0f3eec071f7605f2c2 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2 >Result: >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fa4c6b50ba3c59db3faff8234b61e970 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fa4c6b50ba3c59db3faff8234b61e970 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = 
bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_6fe37e54fbf6c9bb1e66e24c971d7a76 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76 >Result: >[negroni] Started GET /queue/79b4dee22f486972d3aef2af0be6cc0b >[negroni] Completed 200 OK in 101.912µs >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fa4c6b50ba3c59db3faff8234b61e970 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:44:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:44:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fa4c6b50ba3c59db3faff8234b61e970 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970 >Result: >[kubeexec] DEBUG 2018/06/08 07:44:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:44:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:44:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:44:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:44:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970/brick >Result: >[cmdexec] INFO 2018/06/08 07:44:41 Creating volume cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a replica 3 >[kubeexec] DEBUG 2018/06/08 07:44:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970/brick >Result: volume create: cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a: success: please start the volume to access data >[negroni] Started GET /queue/79b4dee22f486972d3aef2af0be6cc0b >[negroni] Completed 200 OK in 118.906µs >[negroni] Started GET /queue/79b4dee22f486972d3aef2af0be6cc0b >[negroni] Completed 200 OK in 114.343µs >[negroni] Started GET /queue/79b4dee22f486972d3aef2af0be6cc0b >[negroni] Completed 200 OK in 121.014µs >[negroni] Started GET /queue/79b4dee22f486972d3aef2af0be6cc0b >[negroni] Completed 200 OK in 187.376µs >[kubeexec] DEBUG 2018/06/08 07:44:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start 
cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a >Result: volume start: cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a: success >[heketi] INFO 2018/06/08 07:44:44 Create Volume succeeded >[asynchttp] INFO 2018/06/08 07:44:44 asynchttp.go:292: Completed job 79b4dee22f486972d3aef2af0be6cc0b in 5.037729966s >[negroni] Started GET /queue/79b4dee22f486972d3aef2af0be6cc0b >[negroni] Completed 303 See Other in 139.004µs >[negroni] Started GET /volumes/c0229543f8974d87bd07caae4493ca58 >[negroni] Completed 200 OK in 2.265835ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 174.272µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.048723ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.033049ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 829.401µs >[negroni] Started DELETE /volumes/24485cd4cac31cedd877a92a75f2397f >[negroni] Completed 202 Accepted in 10.474137ms >[asynchttp] INFO 2018/06/08 07:44:50 asynchttp.go:288: Started job ef78edaa984b016aa2ccbe1ec46ac389 >[heketi] INFO 2018/06/08 07:44:50 Started async operation: Delete Volume >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 207.625µs >[kubeexec] DEBUG 2018/06/08 07:44:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 124.076µs >[negroni] Started GET 
/queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 141.987µs >[kubeexec] DEBUG 2018/06/08 07:44:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume stop cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a force >Result: volume stop: cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a: success >[kubeexec] DEBUG 2018/06/08 07:44:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume delete cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a >Result: volume delete: cns-vol_glusterfs_mongodb5_5d4f9043-6aef-11e8-ab19-005056a5f18a: success >[heketi] INFO 2018/06/08 07:44:53 Deleting brick 8ae5b886d80baf6275776a0a1bf731ad >[heketi] INFO 2018/06/08 07:44:53 Deleting brick e3b5c51759943388b0fc47d575ed5445 >[heketi] INFO 2018/06/08 07:44:53 Deleting brick 08277a41c0781a41f0ec4d102be47ab8 >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 127.611µs >[kubeexec] DEBUG 2018/06/08 07:44:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e3b5c51759943388b0fc47d575ed5445 >[kubeexec] DEBUG 2018/06/08 07:44:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8 | cut -d" " -f1 >Result: 
/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_08277a41c0781a41f0ec4d102be47ab8 >[kubeexec] DEBUG 2018/06/08 07:44:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8ae5b886d80baf6275776a0a1bf731ad >[kubeexec] DEBUG 2018/06/08 07:44:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e3b5c51759943388b0fc47d575ed5445 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_e3b5c51759943388b0fc47d575ed5445 >[kubeexec] DEBUG 2018/06/08 07:44:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_08277a41c0781a41f0ec4d102be47ab8 > >Result: vg_9394bc70699b006c5460c9f654cf345f/tp_08277a41c0781a41f0ec4d102be47ab8 >[kubeexec] DEBUG 2018/06/08 07:44:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8ae5b886d80baf6275776a0a1bf731ad > >Result: vg_3a4297677881963e3f80124971d50eea/tp_8ae5b886d80baf6275776a0a1bf731ad >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 104.172µs >[kubeexec] DEBUG 2018/06/08 07:44:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445 >Result: >[kubeexec] DEBUG 2018/06/08 07:44:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8 >Result: >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 134.163µs >[kubeexec] DEBUG 2018/06/08 07:44:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad >Result: >[kubeexec] DEBUG 2018/06/08 07:44:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_e3b5c51759943388b0fc47d575ed5445/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 07:44:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_08277a41c0781a41f0ec4d102be47ab8/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 116.018µs >[kubeexec] DEBUG 2018/06/08 07:44:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_8ae5b886d80baf6275776a0a1bf731ad/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 07:44:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e3b5c51759943388b0fc47d575ed5445 > >Result: Logical volume "brick_e3b5c51759943388b0fc47d575ed5445" successfully removed >[kubeexec] DEBUG 2018/06/08 07:44:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_08277a41c0781a41f0ec4d102be47ab8 > >Result: Logical volume "brick_08277a41c0781a41f0ec4d102be47ab8" successfully removed >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 101.951µs >[kubeexec] DEBUG 2018/06/08 07:44:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8ae5b886d80baf6275776a0a1bf731ad > >Result: Logical volume "brick_8ae5b886d80baf6275776a0a1bf731ad" successfully removed >[kubeexec] DEBUG 2018/06/08 07:44:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_e3b5c51759943388b0fc47d575ed5445 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 07:44:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_08277a41c0781a41f0ec4d102be47ab8 > >Result: 0 >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 93.587µs >[kubeexec] DEBUG 2018/06/08 07:44:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count 
vg_3a4297677881963e3f80124971d50eea/tp_8ae5b886d80baf6275776a0a1bf731ad > >Result: 0 >[kubeexec] DEBUG 2018/06/08 07:44:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_e3b5c51759943388b0fc47d575ed5445 > >Result: Logical volume "tp_e3b5c51759943388b0fc47d575ed5445" successfully removed >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 115.438µs >[kubeexec] DEBUG 2018/06/08 07:44:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_08277a41c0781a41f0ec4d102be47ab8 > >Result: Logical volume "tp_08277a41c0781a41f0ec4d102be47ab8" successfully removed >[kubeexec] DEBUG 2018/06/08 07:45:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_8ae5b886d80baf6275776a0a1bf731ad > >Result: Logical volume "tp_8ae5b886d80baf6275776a0a1bf731ad" successfully removed >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 200 OK in 113.359µs >[kubeexec] DEBUG 2018/06/08 07:45:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e3b5c51759943388b0fc47d575ed5445 >Result: >[kubeexec] DEBUG 2018/06/08 07:45:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_08277a41c0781a41f0ec4d102be47ab8 >Result: >[negroni] Started 
GET /volumes >[negroni] Completed 401 Unauthorized in 111.511µs >[kubeexec] DEBUG 2018/06/08 07:45:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8ae5b886d80baf6275776a0a1bf731ad >Result: >[heketi] INFO 2018/06/08 07:45:01 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 07:45:01 asynchttp.go:292: Completed job ef78edaa984b016aa2ccbe1ec46ac389 in 10.670538755s >[negroni] Started GET /queue/ef78edaa984b016aa2ccbe1ec46ac389 >[negroni] Completed 204 No Content in 111.504µs >[negroni] Started DELETE /volumes/24485cd4cac31cedd877a92a75f2397f >[negroni] Completed 404 Not Found in 1.871355ms >[negroni] Started GET /volumes >[negroni] Completed 401 Unauthorized in 189.343µs >[heketi] INFO 2018/06/08 07:45:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 07:45:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:45:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 21min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id 
heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6262 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 07:45:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 07:45:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:45:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 21min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6078 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:45:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 07:45:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:45:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 19min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─3763 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:45:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 07:45:14 Cleaned 0 nodes from health cache >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.249431ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.494658ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 940.927µs >[negroni] Started GET /volumes/c0229543f8974d87bd07caae4493ca58 >[negroni] Completed 200 OK in 1.004026ms >[negroni] Started GET /volumes/e6860021031e7cf362c2b3824d6c351f >[negroni] Completed 200 OK in 586.985µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 197.659µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 702.563µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 550.77µs >[negroni] Started GET 
/volumes/c0229543f8974d87bd07caae4493ca58 >[negroni] Completed 200 OK in 598.635µs >[negroni] Started GET /volumes/e6860021031e7cf362c2b3824d6c351f >[negroni] Completed 200 OK in 566.078µs >[negroni] Started GET /volumes >[negroni] Completed 401 Unauthorized in 121.691µs >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 07:46:17 Allocating brick set #0 >[negroni] Completed 202 Accepted in 25.34431ms >[asynchttp] INFO 2018/06/08 07:46:17 asynchttp.go:288: Started job f453822daa8f3bafba9a9883210f917c >[heketi] INFO 2018/06/08 07:46:17 Started async operation: Create Volume >[negroni] Started GET /queue/f453822daa8f3bafba9a9883210f917c >[negroni] Completed 200 OK in 118.227µs >[heketi] INFO 2018/06/08 07:46:17 Creating brick 43ffab10bdb18f3bcce61f5e0c04684f >[heketi] INFO 2018/06/08 07:46:17 Creating brick fea99506a7e983d4d4765dc1d2462625 >[heketi] INFO 2018/06/08 07:46:17 Creating brick b55f73cf47bf2a1fcb341a2e01fd79db >[kubeexec] DEBUG 2018/06/08 07:46:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db >Result: >[kubeexec] DEBUG 2018/06/08 07:46:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625 >Result: >[kubeexec] DEBUG 2018/06/08 07:46:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_43ffab10bdb18f3bcce61f5e0c04684f --virtualsize 10485760K --name brick_43ffab10bdb18f3bcce61f5e0c04684f >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_43ffab10bdb18f3bcce61f5e0c04684f" created. >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_3a4297677881963e3f80124971d50eea/tp_b55f73cf47bf2a1fcb341a2e01fd79db --virtualsize 10485760K --name brick_b55f73cf47bf2a1fcb341a2e01fd79db >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_b55f73cf47bf2a1fcb341a2e01fd79db" created. >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_fea99506a7e983d4d4765dc1d2462625 --virtualsize 10485760K --name brick_fea99506a7e983d4d4765dc1d2462625 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_fea99506a7e983d4d4765dc1d2462625" created. 
>[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b55f73cf47bf2a1fcb341a2e01fd79db >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b55f73cf47bf2a1fcb341a2e01fd79db isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_43ffab10bdb18f3bcce61f5e0c04684f >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_43ffab10bdb18f3bcce61f5e0c04684f isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_fea99506a7e983d4d4765dc1d2462625 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_fea99506a7e983d4d4765dc1d2462625 isize=512 agcount=16, 
agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_43ffab10bdb18f3bcce61f5e0c04684f /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b55f73cf47bf2a1fcb341a2e01fd79db /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_fea99506a7e983d4d4765dc1d2462625 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_43ffab10bdb18f3bcce61f5e0c04684f /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b55f73cf47bf2a1fcb341a2e01fd79db /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_fea99506a7e983d4d4765dc1d2462625 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625 >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f/brick >Result: >[negroni] Started GET /queue/f453822daa8f3bafba9a9883210f917c >[negroni] Completed 200 OK in 99.179µs >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db/brick >Result: >[kubeexec] DEBUG 2018/06/08 07:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625/brick >Result: >[cmdexec] INFO 2018/06/08 07:46:18 Creating 
volume cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a replica 3 >[kubeexec] DEBUG 2018/06/08 07:46:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db/brick >Result: volume create: cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a: success: please start the volume to access data >[negroni] Started GET /queue/f453822daa8f3bafba9a9883210f917c >[negroni] Completed 200 OK in 201.816µs >[negroni] Started GET /queue/f453822daa8f3bafba9a9883210f917c >[negroni] Completed 200 OK in 111.033µs >[negroni] Started DELETE /volumes/c0229543f8974d87bd07caae4493ca58 >[negroni] Completed 202 Accepted in 13.399124ms >[asynchttp] INFO 2018/06/08 07:46:20 asynchttp.go:288: Started job b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[heketi] INFO 2018/06/08 07:46:20 Started async operation: Delete Volume >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 125.365µs >[kubeexec] DEBUG 2018/06/08 07:46:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a >Result: volume start: cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a: success >[heketi] INFO 2018/06/08 07:46:21 Create Volume succeeded >[asynchttp] INFO 2018/06/08 07:46:21 asynchttp.go:292: Completed job 
f453822daa8f3bafba9a9883210f917c in 3.879387245s >[negroni] Started GET /queue/f453822daa8f3bafba9a9883210f917c >[negroni] Completed 303 See Other in 174.248µs >[negroni] Started GET /volumes/ce1dbe38a55233b6023eaf7f32109115 >[negroni] Completed 200 OK in 2.285885ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 362.113µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.186475ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.02557ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 755.982µs >[kubeexec] DEBUG 2018/06/08 07:46:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 122.337µs >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 154.279µs >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 99.47µs >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 145.777µs >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 157.21µs >[kubeexec] DEBUG 2018/06/08 07:46:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume stop cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a force >Result: volume stop: 
cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a: success >[kubeexec] DEBUG 2018/06/08 07:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume delete cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a >Result: volume delete: cns-vol_glusterfs_mongodb5_cca32e68-6aef-11e8-ab19-005056a5f18a: success >[heketi] INFO 2018/06/08 07:46:26 Deleting brick 6fe37e54fbf6c9bb1e66e24c971d7a76 >[heketi] INFO 2018/06/08 07:46:26 Deleting brick 8f853bf979222c0f3eec071f7605f2c2 >[heketi] INFO 2018/06/08 07:46:26 Deleting brick fa4c6b50ba3c59db3faff8234b61e970 >[kubeexec] DEBUG 2018/06/08 07:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2 | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_8f853bf979222c0f3eec071f7605f2c2 >[kubeexec] DEBUG 2018/06/08 07:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_6fe37e54fbf6c9bb1e66e24c971d7a76 >[kubeexec] DEBUG 2018/06/08 07:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fa4c6b50ba3c59db3faff8234b61e970 >[kubeexec] DEBUG 2018/06/08 07:46:26 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_8f853bf979222c0f3eec071f7605f2c2 > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_8f853bf979222c0f3eec071f7605f2c2 >[kubeexec] DEBUG 2018/06/08 07:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_6fe37e54fbf6c9bb1e66e24c971d7a76 > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_6fe37e54fbf6c9bb1e66e24c971d7a76 >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 128.029µs >[kubeexec] DEBUG 2018/06/08 07:46:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fa4c6b50ba3c59db3faff8234b61e970 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_fa4c6b50ba3c59db3faff8234b61e970 >[kubeexec] DEBUG 2018/06/08 07:46:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2 >Result: >[kubeexec] DEBUG 2018/06/08 07:46:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76 >Result: >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 108.182µs >[kubeexec] DEBUG 2018/06/08 
07:46:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970 >Result: >[kubeexec] DEBUG 2018/06/08 07:46:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_8f853bf979222c0f3eec071f7605f2c2/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 97.21µs >[kubeexec] DEBUG 2018/06/08 07:46:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_6fe37e54fbf6c9bb1e66e24c971d7a76/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 07:46:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_fa4c6b50ba3c59db3faff8234b61e970/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 119.315µs >[kubeexec] DEBUG 2018/06/08 07:46:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_8f853bf979222c0f3eec071f7605f2c2 > >Result: Logical volume "brick_8f853bf979222c0f3eec071f7605f2c2" successfully removed >[kubeexec] DEBUG 2018/06/08 07:46:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_6fe37e54fbf6c9bb1e66e24c971d7a76 > >Result: Logical volume 
"brick_6fe37e54fbf6c9bb1e66e24c971d7a76" successfully removed >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 114.714µs >[kubeexec] DEBUG 2018/06/08 07:46:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fa4c6b50ba3c59db3faff8234b61e970 > >Result: Logical volume "brick_fa4c6b50ba3c59db3faff8234b61e970" successfully removed >[kubeexec] DEBUG 2018/06/08 07:46:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_8f853bf979222c0f3eec071f7605f2c2 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 07:46:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_6fe37e54fbf6c9bb1e66e24c971d7a76 > >Result: 0 >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 130.245µs >[kubeexec] DEBUG 2018/06/08 07:46:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_fa4c6b50ba3c59db3faff8234b61e970 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 07:46:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_8f853bf979222c0f3eec071f7605f2c2 > >Result: Logical volume "tp_8f853bf979222c0f3eec071f7605f2c2" successfully removed >[kubeexec] DEBUG 2018/06/08 07:46:32 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_6fe37e54fbf6c9bb1e66e24c971d7a76 > >Result: Logical volume "tp_6fe37e54fbf6c9bb1e66e24c971d7a76" successfully removed >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 103.826µs >[kubeexec] DEBUG 2018/06/08 07:46:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_fa4c6b50ba3c59db3faff8234b61e970 > >Result: Logical volume "tp_fa4c6b50ba3c59db3faff8234b61e970" successfully removed >[kubeexec] DEBUG 2018/06/08 07:46:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_8f853bf979222c0f3eec071f7605f2c2 >Result: >[kubeexec] DEBUG 2018/06/08 07:46:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_6fe37e54fbf6c9bb1e66e24c971d7a76 >Result: >[negroni] Started GET /queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 200 OK in 133.981µs >[kubeexec] DEBUG 2018/06/08 07:46:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fa4c6b50ba3c59db3faff8234b61e970 >Result: >[heketi] INFO 2018/06/08 07:46:33 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 07:46:33 asynchttp.go:292: Completed job b2ad0d8c3ceb29259d4f9cbc0ae00dc9 in 13.273784177s >[negroni] Started GET 
/queue/b2ad0d8c3ceb29259d4f9cbc0ae00dc9 >[negroni] Completed 204 No Content in 158.184µs >[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 12.048504ms >[asynchttp] INFO 2018/06/08 07:46:50 asynchttp.go:288: Started job c8dde8dd483c1ac657f48f19aa7fa167 >[heketi] INFO 2018/06/08 07:46:50 Started async operation: Delete Volume >[negroni] Started GET /queue/c8dde8dd483c1ac657f48f19aa7fa167 >[negroni] Completed 200 OK in 146.932µs >[kubeexec] DEBUG 2018/06/08 07:46:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 07:46:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 07:46:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[kubeexec] ERROR 2018/06/08 07:46:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: 
Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 07:46:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 07:46:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 07:46:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 07:46:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 07:46:51 asynchttp.go:292: Completed job c8dde8dd483c1ac657f48f19aa7fa167 in 735.20644ms >[negroni] Started GET /queue/c8dde8dd483c1ac657f48f19aa7fa167 >[negroni] Completed 500 Internal Server Error in 126.96µs >[heketi] INFO 2018/06/08 07:47:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 07:47:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 
2018/06/08 07:47:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 23min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6536 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:47:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 07:47:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:47:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 23min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6351 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:47:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 07:47:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:47:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 21min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─4123 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 07:47:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 07:47:14 Cleaned 0 nodes from health cache >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.665737ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.839529ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 959.701µs >[negroni] Started GET /volumes/ce1dbe38a55233b6023eaf7f32109115 >[negroni] Completed 200 OK in 1.579851ms >[negroni] Started GET /volumes/e6860021031e7cf362c2b3824d6c351f >[negroni] Completed 200 OK in 602.239µs >[negroni] Started DELETE /volumes/e6860021031e7cf362c2b3824d6c351f >[negroni] Completed 202 Accepted in 13.882575ms >[asynchttp] INFO 2018/06/08 07:47:54 asynchttp.go:288: Started job 1b6a09cb7439ecd97f5d0426af64b3af >[heketi] INFO 2018/06/08 07:47:54 Started async operation: Delete Volume >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 93.171µs >[kubeexec] DEBUG 2018/06/08 07:47:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_e6860021031e7cf362c2b3824d6c351f --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 117.043µs >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 114.347µs >[kubeexec] DEBUG 2018/06/08 07:47:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume stop vol_e6860021031e7cf362c2b3824d6c351f force 
>Result: volume stop: vol_e6860021031e7cf362c2b3824d6c351f: success >[kubeexec] DEBUG 2018/06/08 07:47:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume delete vol_e6860021031e7cf362c2b3824d6c351f >Result: volume delete: vol_e6860021031e7cf362c2b3824d6c351f: success >[heketi] INFO 2018/06/08 07:47:56 Deleting brick 12c589a01d51afb92169220f3344b7df >[heketi] INFO 2018/06/08 07:47:56 Deleting brick 5174520a82f2e1505984f6e152a16133 >[heketi] INFO 2018/06/08 07:47:56 Deleting brick 548c0a969ccc1c76bea99bded13b1297 >[kubeexec] DEBUG 2018/06/08 07:47:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_548c0a969ccc1c76bea99bded13b1297 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_548c0a969ccc1c76bea99bded13b1297 >[kubeexec] DEBUG 2018/06/08 07:47:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_12c589a01d51afb92169220f3344b7df | cut -d" " -f1 >Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_12c589a01d51afb92169220f3344b7df >[kubeexec] DEBUG 2018/06/08 07:47:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5174520a82f2e1505984f6e152a16133 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5174520a82f2e1505984f6e152a16133 >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 139.662µs >[kubeexec] DEBUG 2018/06/08 
07:47:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_548c0a969ccc1c76bea99bded13b1297 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_548c0a969ccc1c76bea99bded13b1297 >[kubeexec] DEBUG 2018/06/08 07:47:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_12c589a01d51afb92169220f3344b7df > >Result: vg_9394bc70699b006c5460c9f654cf345f/tp_12c589a01d51afb92169220f3344b7df >[kubeexec] DEBUG 2018/06/08 07:47:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5174520a82f2e1505984f6e152a16133 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_5174520a82f2e1505984f6e152a16133 >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 170.554µs >[kubeexec] DEBUG 2018/06/08 07:47:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_548c0a969ccc1c76bea99bded13b1297 >Result: >[kubeexec] DEBUG 2018/06/08 07:47:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_12c589a01d51afb92169220f3344b7df >Result: >[kubeexec] DEBUG 2018/06/08 07:47:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5174520a82f2e1505984f6e152a16133 >Result: >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 155.361µs >[kubeexec] DEBUG 2018/06/08 07:47:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_548c0a969ccc1c76bea99bded13b1297/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 07:47:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_12c589a01d51afb92169220f3344b7df/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 07:47:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_5174520a82f2e1505984f6e152a16133/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 101.547µs >[kubeexec] DEBUG 2018/06/08 07:48:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_548c0a969ccc1c76bea99bded13b1297 > >Result: Logical volume "brick_548c0a969ccc1c76bea99bded13b1297" successfully removed >[kubeexec] DEBUG 2018/06/08 07:48:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_12c589a01d51afb92169220f3344b7df > >Result: Logical volume "brick_12c589a01d51afb92169220f3344b7df" successfully removed >[negroni] Started GET 
/queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 119.092µs >[kubeexec] DEBUG 2018/06/08 07:48:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5174520a82f2e1505984f6e152a16133 > >Result: Logical volume "brick_5174520a82f2e1505984f6e152a16133" successfully removed >[kubeexec] DEBUG 2018/06/08 07:48:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_548c0a969ccc1c76bea99bded13b1297 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 07:48:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_12c589a01d51afb92169220f3344b7df > >Result: 0 >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 88.298µs >[kubeexec] DEBUG 2018/06/08 07:48:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_5174520a82f2e1505984f6e152a16133 > >Result: 0 >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 173.095µs >[kubeexec] DEBUG 2018/06/08 07:48:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_548c0a969ccc1c76bea99bded13b1297 > >Result: Logical volume "tp_548c0a969ccc1c76bea99bded13b1297" successfully removed >[kubeexec] DEBUG 2018/06/08 07:48:03 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_12c589a01d51afb92169220f3344b7df > >Result: Logical volume "tp_12c589a01d51afb92169220f3344b7df" successfully removed >[kubeexec] DEBUG 2018/06/08 07:48:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_5174520a82f2e1505984f6e152a16133 > >Result: Logical volume "tp_5174520a82f2e1505984f6e152a16133" successfully removed >[kubeexec] DEBUG 2018/06/08 07:48:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_548c0a969ccc1c76bea99bded13b1297 >Result: >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 200 OK in 115.613µs >[kubeexec] DEBUG 2018/06/08 07:48:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_12c589a01d51afb92169220f3344b7df >Result: >[kubeexec] DEBUG 2018/06/08 07:48:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5174520a82f2e1505984f6e152a16133 >Result: >[heketi] INFO 2018/06/08 07:48:04 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 07:48:04 asynchttp.go:292: Completed job 1b6a09cb7439ecd97f5d0426af64b3af in 10.602868697s >[negroni] Started GET /queue/1b6a09cb7439ecd97f5d0426af64b3af >[negroni] Completed 204 No Content in 116.524µs >[negroni] Started DELETE 
/volumes/e6860021031e7cf362c2b3824d6c351f >[negroni] Completed 404 Not Found in 1.910083ms >[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 12.786221ms >[asynchttp] INFO 2018/06/08 07:49:05 asynchttp.go:288: Started job fc30b9aee857144b39e7e32e0c3e98e8 >[heketi] INFO 2018/06/08 07:49:05 Started async operation: Delete Volume >[negroni] Started GET /queue/fc30b9aee857144b39e7e32e0c3e98e8 >[negroni] Completed 200 OK in 100.497µs >[kubeexec] DEBUG 2018/06/08 07:49:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 07:49:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 07:49:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[kubeexec] ERROR 2018/06/08 07:49:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: 
Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 07:49:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 07:49:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 07:49:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 07:49:06 asynchttp.go:292: Completed job fc30b9aee857144b39e7e32e0c3e98e8 in 763.812826ms >[heketi] ERROR 2018/06/08 07:49:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[negroni] Started GET /queue/fc30b9aee857144b39e7e32e0c3e98e8 >[negroni] Completed 500 Internal Server Error in 159.589µs >[heketi] INFO 2018/06/08 07:49:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 07:49:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 
2018/06/08 07:49:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 25min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:49:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 07:49:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:49:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 25min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6500 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:49:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 07:49:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:49:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 23min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 07:49:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 07:49:14 Cleaned 0 nodes from health cache
>[heketi] INFO 2018/06/08 07:51:14 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 07:51:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:51:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 27min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034
>
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:51:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true
>[cmdexec] INFO 2018/06/08 07:51:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:51:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 27min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─6500 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0
>
>Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:51:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true
>[cmdexec] INFO 2018/06/08 07:51:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:51:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 25min ago
> Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 830 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service
> ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
>
>Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:51:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 07:51:14 Cleaned 0 nodes from health cache
>[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 202 Accepted in 15.137299ms
>[asynchttp] INFO 2018/06/08 07:51:20 asynchttp.go:288: Started job b95907c995b6e365d88e25a41d21ca63
>[heketi] INFO 2018/06/08 07:51:20 Started async operation: Delete Volume
>[negroni] Started GET /queue/b95907c995b6e365d88e25a41d21ca63
>[negroni] Completed 200 OK in 163.558µs
>[kubeexec] DEBUG 2018/06/08 07:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 07:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>]
>[cmdexec] ERROR 2018/06/08 07:51:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[kubeexec] ERROR 2018/06/08 07:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>]
>[cmdexec] ERROR 2018/06/08 07:51:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:51:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:51:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:51:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[asynchttp] INFO 2018/06/08 07:51:21 asynchttp.go:292: Completed job b95907c995b6e365d88e25a41d21ca63 in 722.905951ms
>[negroni] Started GET /queue/b95907c995b6e365d88e25a41d21ca63
>[negroni] Completed 500 Internal Server Error in 136.323µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 1.692182ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.013983ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 546.015µs
>[negroni] Started GET /volumes/ce1dbe38a55233b6023eaf7f32109115
>[negroni] Completed 200 OK in 723.741µs
>[heketi] INFO 2018/06/08 07:53:14 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 07:53:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:53:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 29min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034
>
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:53:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true
>[cmdexec] INFO 2018/06/08 07:53:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:53:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 29min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─6500 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0
>
>Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:53:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true
>[cmdexec] INFO 2018/06/08 07:53:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:53:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 27min ago
> Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 830 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service
> ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
>
>Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:53:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 07:53:14 Cleaned 0 nodes from health cache
>[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 202 Accepted in 10.184128ms
>[asynchttp] INFO 2018/06/08 07:53:35 asynchttp.go:288: Started job ca499bed75730c5d49dfa3c30b8bfb4d
>[heketi] INFO 2018/06/08 07:53:35 Started async operation: Delete Volume
>[negroni] Started GET /queue/ca499bed75730c5d49dfa3c30b8bfb4d
>[negroni] Completed 200 OK in 199.139µs
>[kubeexec] DEBUG 2018/06/08 07:53:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 07:53:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>]
>[cmdexec] ERROR 2018/06/08 07:53:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[kubeexec] ERROR 2018/06/08 07:53:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>]
>[cmdexec] ERROR 2018/06/08 07:53:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:53:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:53:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:53:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[asynchttp] INFO 2018/06/08 07:53:36 asynchttp.go:292: Completed job ca499bed75730c5d49dfa3c30b8bfb4d in 766.156461ms
>[negroni] Started GET /queue/ca499bed75730c5d49dfa3c30b8bfb4d
>[negroni] Completed 500 Internal Server Error in 136.705µs
>[heketi] INFO 2018/06/08 07:55:14 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 07:55:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:55:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 31min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034
>
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:55:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true
>[cmdexec] INFO 2018/06/08 07:55:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:55:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 31min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─6500 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0
>
>Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:55:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true
>[cmdexec] INFO 2018/06/08 07:55:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 07:55:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 29min ago
> Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 830 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service
> ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
>
>Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 07:55:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 07:55:14 Cleaned 0 nodes from health cache
>[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 202 Accepted in 11.759497ms
>[asynchttp] INFO 2018/06/08 07:55:50 asynchttp.go:288: Started job a965fa1cad76deaabfae107bc4fa7b61
>[heketi] INFO 2018/06/08 07:55:50 Started async operation: Delete Volume
>[negroni] Started GET /queue/a965fa1cad76deaabfae107bc4fa7b61
>[negroni] Completed 200 OK in 123.67µs
>[kubeexec] DEBUG 2018/06/08 07:55:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 07:55:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>]
>[cmdexec] ERROR 2018/06/08 07:55:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[kubeexec] ERROR 2018/06/08 07:55:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>]
>[cmdexec] ERROR 2018/06/08 07:55:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:55:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist
>[heketi] ERROR 2018/06/08 07:55:51
/src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 07:55:51 asynchttp.go:292: Completed job a965fa1cad76deaabfae107bc4fa7b61 in 966.874938ms >[heketi] ERROR 2018/06/08 07:55:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[negroni] Started GET /queue/a965fa1cad76deaabfae107bc4fa7b61 >[negroni] Completed 500 Internal Server Error in 190.476µs >[heketi] INFO 2018/06/08 07:57:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 07:57:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:57:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 33min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p 
/var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 07:57:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 07:57:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:57:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 33min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6500 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:57:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 07:57:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:57:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 31min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:57:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 07:57:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 10.438114ms >[asynchttp] INFO 2018/06/08 07:57:53 asynchttp.go:288: Started job 2440346aa2f29a060d86e6947457eae1 >[heketi] INFO 2018/06/08 07:57:53 Started async operation: Delete Volume >[negroni] Started GET /queue/2440346aa2f29a060d86e6947457eae1 >[negroni] Completed 200 OK in 132.135µs >[kubeexec] DEBUG 2018/06/08 07:57:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume 
(vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 07:57:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 07:57:54 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[kubeexec] ERROR 2018/06/08 07:57:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 07:57:54 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 07:57:54 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 07:57:54 
/src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 07:57:54 asynchttp.go:292: Completed job 2440346aa2f29a060d86e6947457eae1 in 736.716267ms >[heketi] ERROR 2018/06/08 07:57:54 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[negroni] Started GET /queue/2440346aa2f29a060d86e6947457eae1 >[negroni] Completed 500 Internal Server Error in 234.005µs >[heketi] INFO 2018/06/08 07:59:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 07:59:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:59:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 35min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p 
/var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 07:59:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 07:59:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:59:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 35min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6500 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:59:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 07:59:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 07:59:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 33min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 07:59:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 07:59:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 13.008059ms >[asynchttp] INFO 2018/06/08 08:00:05 asynchttp.go:288: Started job 44604b026004cedca0e24a37c3c85d1d >[heketi] INFO 2018/06/08 08:00:05 Started async operation: Delete Volume >[negroni] Started GET /queue/44604b026004cedca0e24a37c3c85d1d >[negroni] Completed 200 OK in 130.721µs >[kubeexec] DEBUG 2018/06/08 08:00:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume 
(vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 08:00:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:00:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[kubeexec] ERROR 2018/06/08 08:00:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:00:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:00:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:00:06 
/src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:00:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 08:00:06 asynchttp.go:292: Completed job 44604b026004cedca0e24a37c3c85d1d in 705.536523ms >[negroni] Started GET /queue/44604b026004cedca0e24a37c3c85d1d >[negroni] Completed 500 Internal Server Error in 168.756µs >[heketi] INFO 2018/06/08 08:01:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:01:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:01:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 37min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p 
/var/run/glusterd.pid --log-level INFO > ââ1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:01:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:01:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:01:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 37min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ6500 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:01:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:01:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:01:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 35min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:01:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:01:14 Cleaned 0 nodes from health cache >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.137353ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.542438ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.070897ms >[negroni] Started GET /volumes/ce1dbe38a55233b6023eaf7f32109115 >[negroni] Completed 200 OK in 931.32µs >[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 13.344579ms >[asynchttp] INFO 2018/06/08 08:02:20 asynchttp.go:288: Started job 73d6a98d4b88a1e1e5aecd323e54ea99 >[heketi] INFO 2018/06/08 08:02:20 Started async operation: Delete Volume >[negroni] Started GET /queue/73d6a98d4b88a1e1e5aecd323e54ea99 >[negroni] Completed 200 OK in 133.641µs >[kubeexec] DEBUG 2018/06/08 08:02:21 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 08:02:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:02:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[kubeexec] ERROR 2018/06/08 08:02:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:02:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:02:21 
/src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:02:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:02:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 08:02:21 asynchttp.go:292: Completed job 73d6a98d4b88a1e1e5aecd323e54ea99 in 726.731787ms >[negroni] Started GET /queue/73d6a98d4b88a1e1e5aecd323e54ea99 >[negroni] Completed 500 Internal Server Error in 194.858µs >[heketi] INFO 2018/06/08 08:03:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:03:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:03:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 39min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
--log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:03:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:03:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:03:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 39min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6500 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:03:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:03:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:03:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 37min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:03:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:03:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 10.883657ms >[asynchttp] INFO 2018/06/08 08:04:35 asynchttp.go:288: Started job 9980b8714b71143cdf92265379393a9b >[heketi] INFO 2018/06/08 08:04:35 Started async operation: Delete Volume >[negroni] Started GET /queue/9980b8714b71143cdf92265379393a9b >[negroni] Completed 200 OK in 178.262µs >[kubeexec] DEBUG 2018/06/08 08:04:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume 
(vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 08:04:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:04:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[kubeexec] ERROR 2018/06/08 08:04:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:04:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:04:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:04:36 
/src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 08:04:36 asynchttp.go:292: Completed job 9980b8714b71143cdf92265379393a9b in 799.612198ms >[heketi] ERROR 2018/06/08 08:04:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[negroni] Started GET /queue/9980b8714b71143cdf92265379393a9b >[negroni] Completed 500 Internal Server Error in 198.788µs >[heketi] INFO 2018/06/08 08:05:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:05:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:05:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 41min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p 
/var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:05:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:05:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:05:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 41min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6500 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:05:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:05:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:05:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 39min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:05:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:05:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 15.146222ms >[asynchttp] INFO 2018/06/08 08:06:50 asynchttp.go:288: Started job 80fcb26a5e686e710fcc63cfb55397ab >[heketi] INFO 2018/06/08 08:06:50 Started async operation: Delete Volume >[negroni] Started GET /queue/80fcb26a5e686e710fcc63cfb55397ab >[negroni] Completed 200 OK in 144.774µs >[kubeexec] DEBUG 2018/06/08 08:06:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume 
(vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 08:06:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:06:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[negroni] Started GET /queue/80fcb26a5e686e710fcc63cfb55397ab >[negroni] Completed 200 OK in 117.128µs >[kubeexec] ERROR 2018/06/08 08:06:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:06:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:06:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume 
vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:06:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:06:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 08:06:51 asynchttp.go:292: Completed job 80fcb26a5e686e710fcc63cfb55397ab in 1.054589539s >[negroni] Started GET /queue/80fcb26a5e686e710fcc63cfb55397ab >[negroni] Completed 500 Internal Server Error in 179.595µs >[heketi] INFO 2018/06/08 08:07:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:07:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:07:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 43min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6684 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:07:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:07:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:07:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 43min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─6500 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:07:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:07:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:07:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 41min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─4372 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:07:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:07:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/ce1dbe38a55233b6023eaf7f32109115 >[negroni] Completed 202 Accepted in 9.472215ms >[asynchttp] INFO 2018/06/08 08:07:25 asynchttp.go:288: Started job d89e7c5134f900ef5b3960b9b88d8e39 >[heketi] INFO 2018/06/08 08:07:25 Started async operation: Delete Volume >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 130.307µs >[kubeexec] DEBUG 2018/06/08 08:07:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > 
<count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 174.306µs >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 158.175µs >[kubeexec] DEBUG 2018/06/08 08:07:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a force >Result: volume stop: cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a: success >[kubeexec] DEBUG 2018/06/08 08:07:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a >Result: volume delete: cns-vol_glusterfs_mongodb5_06f1079a-6af0-11e8-ab19-005056a5f18a: success >[heketi] INFO 2018/06/08 08:07:28 Deleting brick 43ffab10bdb18f3bcce61f5e0c04684f >[heketi] INFO 2018/06/08 08:07:28 Deleting brick fea99506a7e983d4d4765dc1d2462625 >[heketi] INFO 2018/06/08 08:07:28 Deleting brick b55f73cf47bf2a1fcb341a2e01fd79db >[kubeexec] DEBUG 2018/06/08 08:07:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_fea99506a7e983d4d4765dc1d2462625 >[kubeexec] DEBUG 2018/06/08 08:07:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f | cut -d" " -f1 >Result: 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_43ffab10bdb18f3bcce61f5e0c04684f >[kubeexec] DEBUG 2018/06/08 08:07:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b55f73cf47bf2a1fcb341a2e01fd79db >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 105.121µs >[kubeexec] DEBUG 2018/06/08 08:07:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_fea99506a7e983d4d4765dc1d2462625 > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_fea99506a7e983d4d4765dc1d2462625 >[kubeexec] DEBUG 2018/06/08 08:07:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_43ffab10bdb18f3bcce61f5e0c04684f > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_43ffab10bdb18f3bcce61f5e0c04684f >[kubeexec] DEBUG 2018/06/08 08:07:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b55f73cf47bf2a1fcb341a2e01fd79db > >Result: vg_3a4297677881963e3f80124971d50eea/tp_b55f73cf47bf2a1fcb341a2e01fd79db >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 89.225µs >[kubeexec] DEBUG 2018/06/08 08:07:30 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625 >Result: >[kubeexec] DEBUG 2018/06/08 08:07:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f >Result: >[kubeexec] DEBUG 2018/06/08 08:07:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db >Result: >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 99.773µs >[kubeexec] DEBUG 2018/06/08 08:07:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_fea99506a7e983d4d4765dc1d2462625/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:07:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_43ffab10bdb18f3bcce61f5e0c04684f/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:07:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_b55f73cf47bf2a1fcb341a2e01fd79db/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 105.204µs >[kubeexec] DEBUG 2018/06/08 08:07:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_fea99506a7e983d4d4765dc1d2462625 > >Result: Logical volume "brick_fea99506a7e983d4d4765dc1d2462625" successfully removed >[kubeexec] DEBUG 2018/06/08 08:07:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_43ffab10bdb18f3bcce61f5e0c04684f > >Result: Logical volume "brick_43ffab10bdb18f3bcce61f5e0c04684f" successfully removed >[kubeexec] DEBUG 2018/06/08 08:07:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b55f73cf47bf2a1fcb341a2e01fd79db > >Result: Logical volume "brick_b55f73cf47bf2a1fcb341a2e01fd79db" successfully removed >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 206.335µs >[kubeexec] DEBUG 2018/06/08 08:07:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_fea99506a7e983d4d4765dc1d2462625 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:07:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_43ffab10bdb18f3bcce61f5e0c04684f > >Result: 0 >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 95.455µs >[kubeexec] DEBUG 2018/06/08 08:07:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_b55f73cf47bf2a1fcb341a2e01fd79db > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:07:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_fea99506a7e983d4d4765dc1d2462625 > >Result: Logical volume "tp_fea99506a7e983d4d4765dc1d2462625" successfully removed >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 137.179µs >[kubeexec] DEBUG 2018/06/08 08:07:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_43ffab10bdb18f3bcce61f5e0c04684f > >Result: Logical volume "tp_43ffab10bdb18f3bcce61f5e0c04684f" successfully removed >[kubeexec] DEBUG 2018/06/08 08:07:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_b55f73cf47bf2a1fcb341a2e01fd79db > >Result: Logical volume "tp_b55f73cf47bf2a1fcb341a2e01fd79db" successfully removed >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 200 OK in 92.652µs >[kubeexec] DEBUG 2018/06/08 08:07:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_fea99506a7e983d4d4765dc1d2462625 >Result: >[kubeexec] DEBUG 2018/06/08 08:07:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_43ffab10bdb18f3bcce61f5e0c04684f >Result: >[kubeexec] DEBUG 2018/06/08 08:07:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b55f73cf47bf2a1fcb341a2e01fd79db >Result: >[heketi] INFO 2018/06/08 08:07:36 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:07:36 asynchttp.go:292: Completed job d89e7c5134f900ef5b3960b9b88d8e39 in 10.534818093s >[negroni] Started GET /queue/d89e7c5134f900ef5b3960b9b88d8e39 >[negroni] Completed 204 No Content in 141.43µs >[heketi] INFO 2018/06/08 08:09:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:09:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:09:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 45min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p 
/var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7384 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
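For reference while reading the Delete Volume job (d89e7c5134f900ef5b3960b9b88d8e39) above: heketi tears down each brick with the same fixed command sequence on every node. The sketch below is a dry-run consolidation of that sequence; the VG/brick names are placeholders copied from one node's log entries, and `run` only prints each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of heketi's per-brick teardown, as logged above.
# VG/BRICK/TP are illustrative placeholders taken from the log.
VG=vg_96f1667f2f1ced2c5ef94772922be93b
BRICK=brick_fea99506a7e983d4d4765dc1d2462625
TP=tp_fea99506a7e983d4d4765dc1d2462625

run() { echo "+ $*"; }  # print the command instead of running it

run umount /var/lib/heketi/mounts/$VG/$BRICK
run sed -i.save "/$BRICK/d" /var/lib/heketi/fstab       # drop the fstab entry
run lvremove --autobackup=n -f /dev/mapper/$VG-$BRICK   # remove the brick LV
run lvs --noheadings --options=thin_count $VG/$TP       # pool empty? (log shows 0)
run lvremove --autobackup=n -f $VG/$TP                  # remove the thin pool
run rmdir /var/lib/heketi/mounts/$VG/$BRICK             # remove the mount point
```

Only after this sequence succeeds on all three nodes does heketi log "Delete Volume succeeded" and return 204 on the queue endpoint.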
>[heketi] INFO 2018/06/08 08:09:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:09:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:09:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 45min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7156 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:09:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:09:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:09:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 43min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5492 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:09:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:09:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:11:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:11:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:11:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 47min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7384 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:11:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:11:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:11:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 47min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7156 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:11:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:11:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:11:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 45min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5492 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:11:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:11:14 Cleaned 0 nodes from health cache >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:12:58 Allocating brick set #0 >[negroni] Completed 202 Accepted in 12.839476ms >[asynchttp] INFO 2018/06/08 08:12:58 asynchttp.go:288: Started job 3424e17d02fffd5a382569865106ef7e >[heketi] INFO 2018/06/08 08:12:58 Started async operation: Create Volume >[negroni] Started GET /queue/3424e17d02fffd5a382569865106ef7e >[negroni] Completed 200 OK in 120.467µs >[heketi] INFO 2018/06/08 08:12:58 Creating brick 0e79afb5f11fb53b2e0ca8f673c07abb >[heketi] INFO 2018/06/08 08:12:58 Creating brick 3fc4d1412dc05e2bffb77dbc6fe5da99 >[heketi] INFO 2018/06/08 08:12:58 Creating brick ec22e3d2714797080dd7b0580ee72839 >[kubeexec] DEBUG 2018/06/08 08:12:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0e79afb5f11fb53b2e0ca8f673c07abb >Result: >[kubeexec] DEBUG 2018/06/08 08:12:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3fc4d1412dc05e2bffb77dbc6fe5da99 >Result: >[kubeexec] DEBUG 2018/06/08 08:12:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ec22e3d2714797080dd7b0580ee72839 >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_0e79afb5f11fb53b2e0ca8f673c07abb --virtualsize 10485760K --name brick_0e79afb5f11fb53b2e0ca8f673c07abb >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_0e79afb5f11fb53b2e0ca8f673c07abb" created. >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_3fc4d1412dc05e2bffb77dbc6fe5da99 --virtualsize 10485760K --name brick_3fc4d1412dc05e2bffb77dbc6fe5da99 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_3fc4d1412dc05e2bffb77dbc6fe5da99" created. 
>[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_3a4297677881963e3f80124971d50eea/tp_ec22e3d2714797080dd7b0580ee72839 --virtualsize 10485760K --name brick_ec22e3d2714797080dd7b0580ee72839 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_ec22e3d2714797080dd7b0580ee72839" created. >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_0e79afb5f11fb53b2e0ca8f673c07abb >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_0e79afb5f11fb53b2e0ca8f673c07abb isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_3fc4d1412dc05e2bffb77dbc6fe5da99 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_3fc4d1412dc05e2bffb77dbc6fe5da99 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 
>log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ec22e3d2714797080dd7b0580ee72839 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ec22e3d2714797080dd7b0580ee72839 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_0e79afb5f11fb53b2e0ca8f673c07abb /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0e79afb5f11fb53b2e0ca8f673c07abb xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_3fc4d1412dc05e2bffb77dbc6fe5da99 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3fc4d1412dc05e2bffb77dbc6fe5da99 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com 
Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ec22e3d2714797080dd7b0580ee72839 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ec22e3d2714797080dd7b0580ee72839 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_0e79afb5f11fb53b2e0ca8f673c07abb /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0e79afb5f11fb53b2e0ca8f673c07abb >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ec22e3d2714797080dd7b0580ee72839 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ec22e3d2714797080dd7b0580ee72839 >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_3fc4d1412dc05e2bffb77dbc6fe5da99 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3fc4d1412dc05e2bffb77dbc6fe5da99 >Result: >[negroni] Started GET /queue/3424e17d02fffd5a382569865106ef7e >[negroni] Completed 200 OK in 119.959µs >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0e79afb5f11fb53b2e0ca8f673c07abb/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ec22e3d2714797080dd7b0580ee72839/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3fc4d1412dc05e2bffb77dbc6fe5da99/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2000 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0e79afb5f11fb53b2e0ca8f673c07abb/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2000 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ec22e3d2714797080dd7b0580ee72839/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:12:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2000 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3fc4d1412dc05e2bffb77dbc6fe5da99/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:13:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0e79afb5f11fb53b2e0ca8f673c07abb/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:13:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ec22e3d2714797080dd7b0580ee72839/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:13:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3fc4d1412dc05e2bffb77dbc6fe5da99/brick >Result: >[cmdexec] INFO 2018/06/08 08:13:00 Creating volume cns-vol_glusterfs_mongodb5_c14e78ae-6af3-11e8-ab19-005056a5f18a replica 3 >[kubeexec] DEBUG 2018/06/08 08:13:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create cns-vol_glusterfs_mongodb5_c14e78ae-6af3-11e8-ab19-005056a5f18a replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0e79afb5f11fb53b2e0ca8f673c07abb/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ec22e3d2714797080dd7b0580ee72839/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3fc4d1412dc05e2bffb77dbc6fe5da99/brick >Result: volume create: cns-vol_glusterfs_mongodb5_c14e78ae-6af3-11e8-ab19-005056a5f18a: success: please start the volume to access data >[negroni] Started GET /queue/3424e17d02fffd5a382569865106ef7e >[negroni] Completed 200 OK in 119.53µs >[negroni] Started GET /queue/3424e17d02fffd5a382569865106ef7e >[negroni] Completed 200 OK in 117.531µs >[kubeexec] DEBUG 2018/06/08 08:13:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start cns-vol_glusterfs_mongodb5_c14e78ae-6af3-11e8-ab19-005056a5f18a >Result: volume start: cns-vol_glusterfs_mongodb5_c14e78ae-6af3-11e8-ab19-005056a5f18a: success >[heketi] INFO 2018/06/08 08:13:02 Create Volume succeeded >[asynchttp] INFO 2018/06/08 
08:13:02 asynchttp.go:292: Completed job 3424e17d02fffd5a382569865106ef7e in 3.784685973s >[negroni] Started GET /queue/3424e17d02fffd5a382569865106ef7e >[negroni] Completed 303 See Other in 120.891µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 2.188195ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 215.673µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 3.4443ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 710.118µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 625.354µs >[heketi] INFO 2018/06/08 08:13:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:13:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:13:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 49min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id 
heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:13:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:13:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:13:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 49min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:13:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:13:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:13:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 47min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:13:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:13:14 Cleaned 0 nodes from health cache >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 170.677µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 659.525µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 620.787µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.104315ms >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 468.786µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 787.568µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.227488ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 707.572µs >[negroni] Started DELETE 
/volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 202 Accepted in 8.0761ms >[asynchttp] INFO 2018/06/08 08:13:35 asynchttp.go:288: Started job 03238a47b86fc10f8b57b281e7ab0514 >[heketi] INFO 2018/06/08 08:13:35 Started async operation: Delete Volume >[negroni] Started GET /queue/03238a47b86fc10f8b57b281e7ab0514 >[negroni] Completed 200 OK in 98.323µs >[kubeexec] DEBUG 2018/06/08 08:13:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c7d5f0f473cce6914804135f0b8ddcd --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c7d5f0f473cce6914804135f0b8ddcd) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 08:13:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c7d5f0f473cce6914804135f0b8ddcd force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >] >[cmdexec] ERROR 2018/06/08 08:13:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[kubeexec] ERROR 2018/06/08 08:13:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c7d5f0f473cce6914804135f0b8ddcd] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not 
exist >] >[cmdexec] ERROR 2018/06/08 08:13:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:13:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:13:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[heketi] ERROR 2018/06/08 08:13:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c7d5f0f473cce6914804135f0b8ddcd: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c7d5f0f473cce6914804135f0b8ddcd: failed: Volume vol_9c7d5f0f473cce6914804135f0b8ddcd does not exist >[asynchttp] INFO 2018/06/08 08:13:36 asynchttp.go:292: Completed job 03238a47b86fc10f8b57b281e7ab0514 in 650.80344ms >[negroni] Started GET /queue/03238a47b86fc10f8b57b281e7ab0514 >[negroni] Completed 500 Internal Server Error in 183.468µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 2.081463ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.576195ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.114368ms >[negroni] Started GET 
/volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.178338ms >[heketi] INFO 2018/06/08 08:15:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:15:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:15:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 51min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7786 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:15:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:15:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:15:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 51min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id 
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7491 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:15:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:15:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:15:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 49min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5842 /usr/sbin/glusterfs -s localhost --volfile-id 
gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:15:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:15:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:17:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:17:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:17:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 53min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p 
/var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:17:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:17:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:17:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 53min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:17:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:17:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:17:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 51min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:17:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:17:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:19:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:19:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:19:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 55min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:19:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:19:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:19:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 55min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:19:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:19:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:19:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 53min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:19:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:19:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:21:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:21:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:21:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 57min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:21:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:21:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:21:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 57min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:21:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:21:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:21:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 55min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:21:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:21:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:23:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:23:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:23:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 1h 59min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:23:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:23:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:23:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 1h 59min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:23:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:23:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:23:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 57min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:23:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:23:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:25:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:25:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:25:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 1min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:25:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:25:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:25:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 1min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:25:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:25:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:25:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 59min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:25:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:25:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:27:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:27:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:27:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 3min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:27:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:27:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:27:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 3min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:27:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:27:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:27:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 1min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:27:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:27:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:29:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:29:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:29:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 5min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:29:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:29:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:29:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 5min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:29:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:29:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:29:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 3min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:29:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:29:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:31:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:31:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:31:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 7min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:31:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:31:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:31:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 7min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:31:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:31:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:31:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 5min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:31:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:31:14 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:33:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:33:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:33:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 9min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ7786 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:33:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:33:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:33:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 9min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─7491 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:33:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:33:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:33:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 7min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─5842 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:33:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:33:14 Cleaned 0 nodes from health cache >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:33:33 Allocating brick set #0 >[negroni] Completed 202 Accepted in 14.35258ms >[asynchttp] INFO 2018/06/08 08:33:33 asynchttp.go:288: Started job a6b906276aeeaaac96c8168db7c988bb >[heketi] INFO 2018/06/08 08:33:33 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:33:33 Creating brick ce0cbbac0ac897ad4fe0a2d0068195ae >[negroni] Started GET /queue/a6b906276aeeaaac96c8168db7c988bb >[negroni] Completed 200 OK in 129.321µs >[heketi] INFO 2018/06/08 08:33:33 Creating brick 989fc70ef70fd18a678f7fc65ac3a56d >[heketi] INFO 2018/06/08 08:33:33 Creating brick 53eb288b3c12a575d4cbef2bce0292ef >[kubeexec] DEBUG 2018/06/08 08:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ce0cbbac0ac897ad4fe0a2d0068195ae >Result: >[kubeexec] DEBUG 2018/06/08 08:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_53eb288b3c12a575d4cbef2bce0292ef >Result: >[kubeexec] DEBUG 2018/06/08 08:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_989fc70ef70fd18a678f7fc65ac3a56d >Result: >[kubeexec] DEBUG 2018/06/08 08:33:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_d389f0278a774bd7443a09af960961d8/tp_53eb288b3c12a575d4cbef2bce0292ef --virtualsize 1048576K --name brick_53eb288b3c12a575d4cbef2bce0292ef >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_53eb288b3c12a575d4cbef2bce0292ef" created. >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_3a4297677881963e3f80124971d50eea/tp_ce0cbbac0ac897ad4fe0a2d0068195ae --virtualsize 1048576K --name brick_ce0cbbac0ac897ad4fe0a2d0068195ae >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_ce0cbbac0ac897ad4fe0a2d0068195ae" created. 
>[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_989fc70ef70fd18a678f7fc65ac3a56d --virtualsize 1048576K --name brick_989fc70ef70fd18a678f7fc65ac3a56d >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_989fc70ef70fd18a678f7fc65ac3a56d" created. >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_53eb288b3c12a575d4cbef2bce0292ef >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_53eb288b3c12a575d4cbef2bce0292ef isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_989fc70ef70fd18a678f7fc65ac3a56d >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_989fc70ef70fd18a678f7fc65ac3a56d isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log 
=internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ce0cbbac0ac897ad4fe0a2d0068195ae >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ce0cbbac0ac897ad4fe0a2d0068195ae isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_53eb288b3c12a575d4cbef2bce0292ef /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_53eb288b3c12a575d4cbef2bce0292ef xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ce0cbbac0ac897ad4fe0a2d0068195ae /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ce0cbbac0ac897ad4fe0a2d0068195ae xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_989fc70ef70fd18a678f7fc65ac3a56d /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_989fc70ef70fd18a678f7fc65ac3a56d xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_53eb288b3c12a575d4cbef2bce0292ef /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_53eb288b3c12a575d4cbef2bce0292ef >Result: >[negroni] Started GET /queue/a6b906276aeeaaac96c8168db7c988bb >[negroni] Completed 200 OK in 136.298µs >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_989fc70ef70fd18a678f7fc65ac3a56d /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_989fc70ef70fd18a678f7fc65ac3a56d >Result: >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ce0cbbac0ac897ad4fe0a2d0068195ae /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ce0cbbac0ac897ad4fe0a2d0068195ae >Result: >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_53eb288b3c12a575d4cbef2bce0292ef/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:33:34 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ce0cbbac0ac897ad4fe0a2d0068195ae/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_989fc70ef70fd18a678f7fc65ac3a56d/brick >Result: >[cmdexec] INFO 2018/06/08 08:33:34 Creating volume vol_793f2fea1ecd5540c6c7b9011fef2fef replica 3 >[kubeexec] DEBUG 2018/06/08 08:33:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_793f2fea1ecd5540c6c7b9011fef2fef replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ce0cbbac0ac897ad4fe0a2d0068195ae/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_989fc70ef70fd18a678f7fc65ac3a56d/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_53eb288b3c12a575d4cbef2bce0292ef/brick >Result: volume create: vol_793f2fea1ecd5540c6c7b9011fef2fef: success: please start the volume to access data >[negroni] Started GET /queue/a6b906276aeeaaac96c8168db7c988bb >[negroni] Completed 200 OK in 228.369µs >[negroni] Started GET /queue/a6b906276aeeaaac96c8168db7c988bb >[negroni] Completed 200 OK in 177.846µs >[negroni] Started GET /queue/a6b906276aeeaaac96c8168db7c988bb >[negroni] Completed 200 OK in 165.067µs >[negroni] Started GET /queue/a6b906276aeeaaac96c8168db7c988bb >[negroni] Completed 200 OK in 245.969µs >[kubeexec] DEBUG 2018/06/08 08:33:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster 
--mode=script volume start vol_793f2fea1ecd5540c6c7b9011fef2fef >Result: volume start: vol_793f2fea1ecd5540c6c7b9011fef2fef: success >[heketi] INFO 2018/06/08 08:33:39 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:33:39 asynchttp.go:292: Completed job a6b906276aeeaaac96c8168db7c988bb in 5.672020936s >[negroni] Started GET /queue/a6b906276aeeaaac96c8168db7c988bb >[negroni] Completed 303 See Other in 281.441µs >[negroni] Started GET /volumes/793f2fea1ecd5540c6c7b9011fef2fef >[negroni] Completed 200 OK in 4.281862ms >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 208.003µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 280.271µs >[negroni] Started GET /volumes/793f2fea1ecd5540c6c7b9011fef2fef >[negroni] Completed 200 OK in 987.406µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 628.178µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.607917ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.025898ms >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.30019ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 977.048µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 819.679µs >[negroni] Started POST /devices >[heketi] INFO 2018/06/08 08:33:39 Adding device /dev/sdf to node 70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 202 Accepted in 10.38855ms >[asynchttp] INFO 2018/06/08 08:33:39 asynchttp.go:288: Started job fb0fdb6d6bfe1bc1e72f1b72d34e734c >[negroni] Started GET /queue/fb0fdb6d6bfe1bc1e72f1b72d34e734c >[negroni] Completed 200 OK in 111.967µs >[kubeexec] DEBUG 2018/06/08 08:33:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-pg4xc Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/sdf' >Result: Physical volume "/dev/sdf" successfully created. >[kubeexec] DEBUG 2018/06/08 08:33:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgcreate --autobackup=n vg_b6293dbff56320d01ba8a795f86a2b5f /dev/sdf >Result: Volume group "vg_b6293dbff56320d01ba8a795f86a2b5f" successfully created >[kubeexec] DEBUG 2018/06/08 08:33:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgdisplay -c vg_b6293dbff56320d01ba8a795f86a2b5f >Result: vg_b6293dbff56320d01ba8a795f86a2b5f:r/w:772:-1:0:0:0:-1:0:1:1:104722432:4096:25567:0:25567:OMEJM3-b3Xx-GZC1-dQSj-WHSM-XDDc-QCeiaG >[cmdexec] DEBUG 2018/06/08 08:33:40 /src/github.com/heketi/heketi/executors/cmdexec/device.go:147: Size of /dev/sdf in dhcp46-122.lab.eng.blr.redhat.com is 104722432 >[heketi] INFO 2018/06/08 08:33:40 Added device /dev/sdf >[asynchttp] INFO 2018/06/08 08:33:40 asynchttp.go:292: Completed job fb0fdb6d6bfe1bc1e72f1b72d34e734c in 653.134034ms >[negroni] Started GET /queue/fb0fdb6d6bfe1bc1e72f1b72d34e734c >[negroni] Completed 204 No Content in 182.131µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 1.441963ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 316.841µs >[negroni] Started GET /volumes/793f2fea1ecd5540c6c7b9011fef2fef >[negroni] Completed 200 OK in 1.492479ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 966.136µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 764.237µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 656.167µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 
200 OK in 992.265µs >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 958.383µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 726.622µs >[negroni] Started POST /devices/b6293dbff56320d01ba8a795f86a2b5f/state >[negroni] Completed 202 Accepted in 485.924µs >[asynchttp] INFO 2018/06/08 08:33:41 asynchttp.go:288: Started job a857904fe3a2711216c5813f9db75cba >[negroni] Started GET /queue/a857904fe3a2711216c5813f9db75cba >[negroni] Completed 200 OK in 115.85µs >[asynchttp] INFO 2018/06/08 08:33:41 asynchttp.go:292: Completed job a857904fe3a2711216c5813f9db75cba in 7.634885ms >[negroni] Started GET /queue/a857904fe3a2711216c5813f9db75cba >[negroni] Completed 204 No Content in 154.032µs >[negroni] Started POST /devices/b6293dbff56320d01ba8a795f86a2b5f/state >[negroni] Completed 202 Accepted in 1.693803ms >[asynchttp] INFO 2018/06/08 08:33:42 asynchttp.go:288: Started job 310142380ed00bede5e869adf23a8b64 >[heketi] INFO 2018/06/08 08:33:42 Running Remove Device >[negroni] Started GET /queue/310142380ed00bede5e869adf23a8b64 >[negroni] Completed 200 OK in 243.504µs >[asynchttp] INFO 2018/06/08 08:33:42 asynchttp.go:292: Completed job 310142380ed00bede5e869adf23a8b64 in 9.18147ms >[negroni] Started GET /queue/310142380ed00bede5e869adf23a8b64 >[negroni] Completed 204 No Content in 158.279µs >[negroni] Started DELETE /devices/b6293dbff56320d01ba8a795f86a2b5f >[heketi] INFO 2018/06/08 08:33:43 Deleting device b6293dbff56320d01ba8a795f86a2b5f on node 70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 202 Accepted in 1.400707ms >[asynchttp] INFO 2018/06/08 08:33:43 asynchttp.go:288: Started job d1bc4ee485675b71a499e0e34583b5d4 >[negroni] Started GET /queue/d1bc4ee485675b71a499e0e34583b5d4 >[negroni] Completed 200 OK in 104.215µs >[kubeexec] DEBUG 2018/06/08 08:33:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-pg4xc Command: vgremove vg_b6293dbff56320d01ba8a795f86a2b5f >Result: Volume group "vg_b6293dbff56320d01ba8a795f86a2b5f" successfully removed >[kubeexec] DEBUG 2018/06/08 08:33:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: pvremove '/dev/sdf' >Result: Labels on physical volume "/dev/sdf" successfully wiped. >[kubeexec] ERROR 2018/06/08 08:33:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [ls /var/lib/heketi/mounts/vg_b6293dbff56320d01ba8a795f86a2b5f] on glusterfs-storage-pg4xc: Err[command terminated with exit code 2]: Stdout []: Stderr [ls: cannot access /var/lib/heketi/mounts/vg_b6293dbff56320d01ba8a795f86a2b5f: No such file or directory >] >[heketi] INFO 2018/06/08 08:33:43 Deleted node [b6293dbff56320d01ba8a795f86a2b5f] >[asynchttp] INFO 2018/06/08 08:33:43 asynchttp.go:292: Completed job d1bc4ee485675b71a499e0e34583b5d4 in 518.866636ms >[negroni] Started GET /queue/d1bc4ee485675b71a499e0e34583b5d4 >[negroni] Completed 204 No Content in 150.003µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 1.93406ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 505.786µs >[negroni] Started GET /volumes/793f2fea1ecd5540c6c7b9011fef2fef >[negroni] Completed 200 OK in 1.884836ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.495029ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.191414ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 675.798µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.880039ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.535855ms >[negroni] Started GET 
/nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.264264ms >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:33:44 Allocating brick set #0 >[negroni] Completed 202 Accepted in 18.969419ms >[asynchttp] INFO 2018/06/08 08:33:44 asynchttp.go:288: Started job b94ff76af50977cdc0c77fea87a14955 >[heketi] INFO 2018/06/08 08:33:44 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:33:44 Creating brick 0f1d69bf8eee0f41a939f35c5a4e36e8 >[heketi] INFO 2018/06/08 08:33:44 Creating brick d8e106a809d43a339c3db048d0b81831 >[heketi] INFO 2018/06/08 08:33:44 Creating brick 2fdd0b36617fe22a539bf2bb950d7a8c >[negroni] Started GET /queue/b94ff76af50977cdc0c77fea87a14955 >[negroni] Completed 200 OK in 138.224µs >[kubeexec] DEBUG 2018/06/08 08:33:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0f1d69bf8eee0f41a939f35c5a4e36e8 >Result: >[kubeexec] DEBUG 2018/06/08 08:33:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d8e106a809d43a339c3db048d0b81831 >Result: >[kubeexec] DEBUG 2018/06/08 08:33:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_2fdd0b36617fe22a539bf2bb950d7a8c >Result: >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_0f1d69bf8eee0f41a939f35c5a4e36e8 
--virtualsize 1048576K --name brick_0f1d69bf8eee0f41a939f35c5a4e36e8 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_0f1d69bf8eee0f41a939f35c5a4e36e8" created. >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d8e106a809d43a339c3db048d0b81831 --virtualsize 1048576K --name brick_d8e106a809d43a339c3db048d0b81831 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_d8e106a809d43a339c3db048d0b81831" created. >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_2fdd0b36617fe22a539bf2bb950d7a8c --virtualsize 1048576K --name brick_2fdd0b36617fe22a539bf2bb950d7a8c >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_2fdd0b36617fe22a539bf2bb950d7a8c" created. 
>[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0f1d69bf8eee0f41a939f35c5a4e36e8 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0f1d69bf8eee0f41a939f35c5a4e36e8 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d8e106a809d43a339c3db048d0b81831 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d8e106a809d43a339c3db048d0b81831 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0f1d69bf8eee0f41a939f35c5a4e36e8 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0f1d69bf8eee0f41a939f35c5a4e36e8 xfs rw,inode64,noatime,nouuid 1 2\" >> 
\"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d8e106a809d43a339c3db048d0b81831 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d8e106a809d43a339c3db048d0b81831 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_2fdd0b36617fe22a539bf2bb950d7a8c >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_2fdd0b36617fe22a539bf2bb950d7a8c isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_2fdd0b36617fe22a539bf2bb950d7a8c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_2fdd0b36617fe22a539bf2bb950d7a8c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d8e106a809d43a339c3db048d0b81831 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d8e106a809d43a339c3db048d0b81831 >Result: >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0f1d69bf8eee0f41a939f35c5a4e36e8 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0f1d69bf8eee0f41a939f35c5a4e36e8 >Result: >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d8e106a809d43a339c3db048d0b81831/brick >Result: >[negroni] Started GET /queue/b94ff76af50977cdc0c77fea87a14955 >[negroni] Completed 200 OK in 134.575µs >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_2fdd0b36617fe22a539bf2bb950d7a8c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_2fdd0b36617fe22a539bf2bb950d7a8c >Result: >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0f1d69bf8eee0f41a939f35c5a4e36e8/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_2fdd0b36617fe22a539bf2bb950d7a8c/brick >Result: >[cmdexec] INFO 2018/06/08 08:33:45 Creating volume vol_e2b01dc5bde10ca95a8fa2b92a019ca3 replica 3 >[kubeexec] DEBUG 2018/06/08 08:33:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_e2b01dc5bde10ca95a8fa2b92a019ca3 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0f1d69bf8eee0f41a939f35c5a4e36e8/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_2fdd0b36617fe22a539bf2bb950d7a8c/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d8e106a809d43a339c3db048d0b81831/brick >Result: volume create: vol_e2b01dc5bde10ca95a8fa2b92a019ca3: success: please start the volume to access data >[negroni] Started GET /queue/b94ff76af50977cdc0c77fea87a14955 >[negroni] Completed 200 OK in 155.287µs >[negroni] Started GET /queue/b94ff76af50977cdc0c77fea87a14955 >[negroni] Completed 200 OK in 271.082µs >[negroni] Started GET /queue/b94ff76af50977cdc0c77fea87a14955 >[negroni] Completed 200 OK in 143.62µs >[kubeexec] DEBUG 2018/06/08 08:33:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_e2b01dc5bde10ca95a8fa2b92a019ca3 >Result: volume start: vol_e2b01dc5bde10ca95a8fa2b92a019ca3: success >[heketi] INFO 2018/06/08 08:33:49 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:33:49 asynchttp.go:292: Completed job b94ff76af50977cdc0c77fea87a14955 in 4.577582489s >[negroni] Started GET /queue/b94ff76af50977cdc0c77fea87a14955 >[negroni] Completed 303 See Other in 186.461µs >[negroni] Started GET /volumes/e2b01dc5bde10ca95a8fa2b92a019ca3 >[negroni] Completed 200 OK in 3.151584ms >[negroni] Started DELETE 
/volumes/e2b01dc5bde10ca95a8fa2b92a019ca3 >[negroni] Completed 202 Accepted in 10.219959ms >[asynchttp] INFO 2018/06/08 08:33:49 asynchttp.go:288: Started job 644371ffaefa640ed2e8f976a95e9dbb >[heketi] INFO 2018/06/08 08:33:49 Started async operation: Delete Volume >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 158.844µs >[kubeexec] DEBUG 2018/06/08 08:33:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_e2b01dc5bde10ca95a8fa2b92a019ca3 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 257.151µs >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 246.361µs >[kubeexec] DEBUG 2018/06/08 08:33:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_e2b01dc5bde10ca95a8fa2b92a019ca3 force >Result: volume stop: vol_e2b01dc5bde10ca95a8fa2b92a019ca3: success >[kubeexec] DEBUG 2018/06/08 08:33:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_e2b01dc5bde10ca95a8fa2b92a019ca3 >Result: volume delete: vol_e2b01dc5bde10ca95a8fa2b92a019ca3: success >[heketi] INFO 2018/06/08 08:33:52 Deleting brick 2fdd0b36617fe22a539bf2bb950d7a8c >[heketi] INFO 2018/06/08 08:33:52 Deleting brick 0f1d69bf8eee0f41a939f35c5a4e36e8 >[heketi] INFO 2018/06/08 08:33:52 Deleting brick d8e106a809d43a339c3db048d0b81831 >[kubeexec] DEBUG 2018/06/08 08:33:52 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0f1d69bf8eee0f41a939f35c5a4e36e8 | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0f1d69bf8eee0f41a939f35c5a4e36e8 >[kubeexec] DEBUG 2018/06/08 08:33:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d8e106a809d43a339c3db048d0b81831 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d8e106a809d43a339c3db048d0b81831 >[kubeexec] DEBUG 2018/06/08 08:33:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_2fdd0b36617fe22a539bf2bb950d7a8c | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_2fdd0b36617fe22a539bf2bb950d7a8c >[kubeexec] DEBUG 2018/06/08 08:33:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0f1d69bf8eee0f41a939f35c5a4e36e8 > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_0f1d69bf8eee0f41a939f35c5a4e36e8 >[kubeexec] DEBUG 2018/06/08 08:33:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d8e106a809d43a339c3db048d0b81831 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d8e106a809d43a339c3db048d0b81831 >[negroni] 
Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 185.554µs >[kubeexec] DEBUG 2018/06/08 08:33:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_2fdd0b36617fe22a539bf2bb950d7a8c > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_2fdd0b36617fe22a539bf2bb950d7a8c >[kubeexec] DEBUG 2018/06/08 08:33:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0f1d69bf8eee0f41a939f35c5a4e36e8 >Result: >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 135.775µs >[kubeexec] DEBUG 2018/06/08 08:33:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d8e106a809d43a339c3db048d0b81831 >Result: >[kubeexec] DEBUG 2018/06/08 08:33:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_2fdd0b36617fe22a539bf2bb950d7a8c >Result: >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 253.861µs >[kubeexec] DEBUG 2018/06/08 08:33:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_0f1d69bf8eee0f41a939f35c5a4e36e8/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:33:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_d8e106a809d43a339c3db048d0b81831/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:33:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_2fdd0b36617fe22a539bf2bb950d7a8c/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 218.103µs >[kubeexec] DEBUG 2018/06/08 08:33:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0f1d69bf8eee0f41a939f35c5a4e36e8 > >Result: Logical volume "brick_0f1d69bf8eee0f41a939f35c5a4e36e8" successfully removed >[kubeexec] DEBUG 2018/06/08 08:33:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d8e106a809d43a339c3db048d0b81831 > >Result: Logical volume "brick_d8e106a809d43a339c3db048d0b81831" successfully removed >[kubeexec] DEBUG 2018/06/08 08:33:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_2fdd0b36617fe22a539bf2bb950d7a8c > >Result: Logical volume "brick_2fdd0b36617fe22a539bf2bb950d7a8c" successfully removed >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 271.221µs >[kubeexec] DEBUG 2018/06/08 08:33:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings 
--options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_0f1d69bf8eee0f41a939f35c5a4e36e8 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:33:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d8e106a809d43a339c3db048d0b81831 > >Result: 0 >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 161.997µs >[kubeexec] DEBUG 2018/06/08 08:33:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_2fdd0b36617fe22a539bf2bb950d7a8c > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:33:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_0f1d69bf8eee0f41a939f35c5a4e36e8 > >Result: Logical volume "tp_0f1d69bf8eee0f41a939f35c5a4e36e8" successfully removed >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 145.637µs >[kubeexec] DEBUG 2018/06/08 08:33:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d8e106a809d43a339c3db048d0b81831 > >Result: Logical volume "tp_d8e106a809d43a339c3db048d0b81831" successfully removed >[kubeexec] DEBUG 2018/06/08 08:33:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_2fdd0b36617fe22a539bf2bb950d7a8c > >Result: Logical volume "tp_2fdd0b36617fe22a539bf2bb950d7a8c" 
successfully removed >[kubeexec] DEBUG 2018/06/08 08:33:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0f1d69bf8eee0f41a939f35c5a4e36e8 >Result: >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 200 OK in 273.848µs >[kubeexec] DEBUG 2018/06/08 08:34:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d8e106a809d43a339c3db048d0b81831 >Result: >[kubeexec] DEBUG 2018/06/08 08:34:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_2fdd0b36617fe22a539bf2bb950d7a8c >Result: >[heketi] INFO 2018/06/08 08:34:00 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:34:00 asynchttp.go:292: Completed job 644371ffaefa640ed2e8f976a95e9dbb in 10.458558471s >[negroni] Started GET /queue/644371ffaefa640ed2e8f976a95e9dbb >[negroni] Completed 204 No Content in 177.206µs >[negroni] Started POST /devices/b6293dbff56320d01ba8a795f86a2b5f/state >[negroni] Completed 404 Not Found in 1.564239ms >[negroni] Started DELETE /volumes/793f2fea1ecd5540c6c7b9011fef2fef >[negroni] Completed 202 Accepted in 9.776324ms >[asynchttp] INFO 2018/06/08 08:34:01 asynchttp.go:288: Started job fb69590d1d0f76ff40b6a5068a0651c3 >[heketi] INFO 2018/06/08 08:34:01 Started async operation: Delete Volume >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 124.772µs >[kubeexec] DEBUG 2018/06/08 08:34:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: 
gluster --mode=script snapshot list vol_793f2fea1ecd5540c6c7b9011fef2fef --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 234.287µs >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 286.619µs >[kubeexec] DEBUG 2018/06/08 08:34:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_793f2fea1ecd5540c6c7b9011fef2fef force >Result: volume stop: vol_793f2fea1ecd5540c6c7b9011fef2fef: success >[kubeexec] DEBUG 2018/06/08 08:34:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_793f2fea1ecd5540c6c7b9011fef2fef >Result: volume delete: vol_793f2fea1ecd5540c6c7b9011fef2fef: success >[heketi] INFO 2018/06/08 08:34:03 Deleting brick 989fc70ef70fd18a678f7fc65ac3a56d >[heketi] INFO 2018/06/08 08:34:03 Deleting brick ce0cbbac0ac897ad4fe0a2d0068195ae >[heketi] INFO 2018/06/08 08:34:03 Deleting brick 53eb288b3c12a575d4cbef2bce0292ef >[kubeexec] DEBUG 2018/06/08 08:34:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_53eb288b3c12a575d4cbef2bce0292ef | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_53eb288b3c12a575d4cbef2bce0292ef >[kubeexec] DEBUG 2018/06/08 08:34:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_989fc70ef70fd18a678f7fc65ac3a56d | cut -d" " -f1 >Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_989fc70ef70fd18a678f7fc65ac3a56d >[kubeexec] DEBUG 2018/06/08 08:34:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ce0cbbac0ac897ad4fe0a2d0068195ae | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ce0cbbac0ac897ad4fe0a2d0068195ae >[kubeexec] DEBUG 2018/06/08 08:34:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_53eb288b3c12a575d4cbef2bce0292ef > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_53eb288b3c12a575d4cbef2bce0292ef >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 238.084µs >[kubeexec] DEBUG 2018/06/08 08:34:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_989fc70ef70fd18a678f7fc65ac3a56d > >Result: vg_9394bc70699b006c5460c9f654cf345f/tp_989fc70ef70fd18a678f7fc65ac3a56d >[kubeexec] DEBUG 2018/06/08 08:34:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ce0cbbac0ac897ad4fe0a2d0068195ae > >Result: vg_3a4297677881963e3f80124971d50eea/tp_ce0cbbac0ac897ad4fe0a2d0068195ae >[kubeexec] DEBUG 2018/06/08 08:34:04 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_53eb288b3c12a575d4cbef2bce0292ef >Result: >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 194.876µs >[kubeexec] DEBUG 2018/06/08 08:34:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_989fc70ef70fd18a678f7fc65ac3a56d >Result: >[kubeexec] DEBUG 2018/06/08 08:34:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ce0cbbac0ac897ad4fe0a2d0068195ae >Result: >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 285.727µs >[kubeexec] DEBUG 2018/06/08 08:34:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_53eb288b3c12a575d4cbef2bce0292ef/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:34:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_989fc70ef70fd18a678f7fc65ac3a56d/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 267.254µs >[kubeexec] DEBUG 2018/06/08 08:34:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_ce0cbbac0ac897ad4fe0a2d0068195ae/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 
08:34:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_53eb288b3c12a575d4cbef2bce0292ef > >Result: Logical volume "brick_53eb288b3c12a575d4cbef2bce0292ef" successfully removed >[kubeexec] DEBUG 2018/06/08 08:34:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_989fc70ef70fd18a678f7fc65ac3a56d > >Result: Logical volume "brick_989fc70ef70fd18a678f7fc65ac3a56d" successfully removed >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 143.391µs >[kubeexec] DEBUG 2018/06/08 08:34:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ce0cbbac0ac897ad4fe0a2d0068195ae > >Result: Logical volume "brick_ce0cbbac0ac897ad4fe0a2d0068195ae" successfully removed >[kubeexec] DEBUG 2018/06/08 08:34:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_53eb288b3c12a575d4cbef2bce0292ef > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:34:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_989fc70ef70fd18a678f7fc65ac3a56d > >Result: 0 >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 168.769µs >[kubeexec] DEBUG 2018/06/08 08:34:09 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_ce0cbbac0ac897ad4fe0a2d0068195ae > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:34:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_53eb288b3c12a575d4cbef2bce0292ef > >Result: Logical volume "tp_53eb288b3c12a575d4cbef2bce0292ef" successfully removed >[kubeexec] DEBUG 2018/06/08 08:34:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_989fc70ef70fd18a678f7fc65ac3a56d > >Result: Logical volume "tp_989fc70ef70fd18a678f7fc65ac3a56d" successfully removed >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 141.103µs >[kubeexec] DEBUG 2018/06/08 08:34:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_ce0cbbac0ac897ad4fe0a2d0068195ae > >Result: Logical volume "tp_ce0cbbac0ac897ad4fe0a2d0068195ae" successfully removed >[kubeexec] DEBUG 2018/06/08 08:34:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_53eb288b3c12a575d4cbef2bce0292ef >Result: >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 200 OK in 158.441µs >[kubeexec] DEBUG 2018/06/08 08:34:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_989fc70ef70fd18a678f7fc65ac3a56d >Result: >[kubeexec] DEBUG 2018/06/08 08:34:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ce0cbbac0ac897ad4fe0a2d0068195ae >Result: >[heketi] INFO 2018/06/08 08:34:11 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:34:11 asynchttp.go:292: Completed job fb69590d1d0f76ff40b6a5068a0651c3 in 10.554691286s >[negroni] Started GET /queue/fb69590d1d0f76ff40b6a5068a0651c3 >[negroni] Completed 204 No Content in 172.12µs >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:34:12 Allocating brick set #0 >[negroni] Completed 202 Accepted in 17.374374ms >[asynchttp] INFO 2018/06/08 08:34:12 asynchttp.go:288: Started job 2bbaf93134b525b49c799f542f62ac0c >[heketi] INFO 2018/06/08 08:34:12 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:34:12 Creating brick 506f35d8abd5d43ae307f4211452c5d9 >[heketi] INFO 2018/06/08 08:34:12 Creating brick 3a7d32deb339b6fa432b0fcbc36ae94c >[heketi] INFO 2018/06/08 08:34:12 Creating brick d59b610bd2c0773ffd302f1611ae4a02 >[negroni] Started GET /queue/2bbaf93134b525b49c799f542f62ac0c >[negroni] Completed 200 OK in 150.676µs >[kubeexec] DEBUG 2018/06/08 08:34:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d59b610bd2c0773ffd302f1611ae4a02 >Result: >[kubeexec] DEBUG 2018/06/08 08:34:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p 
/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3a7d32deb339b6fa432b0fcbc36ae94c >Result: >[kubeexec] DEBUG 2018/06/08 08:34:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_d389f0278a774bd7443a09af960961d8/tp_d59b610bd2c0773ffd302f1611ae4a02 --virtualsize 1048576K --name brick_d59b610bd2c0773ffd302f1611ae4a02 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_d59b610bd2c0773ffd302f1611ae4a02" created. >[kubeexec] DEBUG 2018/06/08 08:34:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_506f35d8abd5d43ae307f4211452c5d9 >Result: >[kubeexec] DEBUG 2018/06/08 08:34:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d59b610bd2c0773ffd302f1611ae4a02 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d59b610bd2c0773ffd302f1611ae4a02 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:34:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 
256K --size 1048576K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_3a7d32deb339b6fa432b0fcbc36ae94c --virtualsize 1048576K --name brick_3a7d32deb339b6fa432b0fcbc36ae94c >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_3a7d32deb339b6fa432b0fcbc36ae94c" created. >[kubeexec] DEBUG 2018/06/08 08:34:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d59b610bd2c0773ffd302f1611ae4a02 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d59b610bd2c0773ffd302f1611ae4a02 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3a7d32deb339b6fa432b0fcbc36ae94c >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3a7d32deb339b6fa432b0fcbc36ae94c isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d59b610bd2c0773ffd302f1611ae4a02 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d59b610bd2c0773ffd302f1611ae4a02 >Result: >[kubeexec] 
DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_506f35d8abd5d43ae307f4211452c5d9 --virtualsize 1048576K --name brick_506f35d8abd5d43ae307f4211452c5d9
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_506f35d8abd5d43ae307f4211452c5d9" created.
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3a7d32deb339b6fa432b0fcbc36ae94c /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3a7d32deb339b6fa432b0fcbc36ae94c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d59b610bd2c0773ffd302f1611ae4a02/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_506f35d8abd5d43ae307f4211452c5d9
>Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_506f35d8abd5d43ae307f4211452c5d9 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[negroni] Started GET /queue/2bbaf93134b525b49c799f542f62ac0c
>[negroni] Completed 200 OK in 170.06µs
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3a7d32deb339b6fa432b0fcbc36ae94c /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3a7d32deb339b6fa432b0fcbc36ae94c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_506f35d8abd5d43ae307f4211452c5d9 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_506f35d8abd5d43ae307f4211452c5d9 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3a7d32deb339b6fa432b0fcbc36ae94c/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_506f35d8abd5d43ae307f4211452c5d9 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_506f35d8abd5d43ae307f4211452c5d9
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_506f35d8abd5d43ae307f4211452c5d9/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:34:13 Creating volume vol_79d83a099d1495f94123b0ca3fa0bac2 replica 3
>[kubeexec] DEBUG 2018/06/08 08:34:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_79d83a099d1495f94123b0ca3fa0bac2 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_506f35d8abd5d43ae307f4211452c5d9/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d59b610bd2c0773ffd302f1611ae4a02/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3a7d32deb339b6fa432b0fcbc36ae94c/brick
>Result: volume create: vol_79d83a099d1495f94123b0ca3fa0bac2: success: please start the volume to access data
>[negroni] Started GET /queue/2bbaf93134b525b49c799f542f62ac0c
>[negroni] Completed 200 OK in 130.126µs
>[negroni] Started GET /queue/2bbaf93134b525b49c799f542f62ac0c
>[negroni] Completed 200 OK in 128.348µs
>[negroni] Started GET /queue/2bbaf93134b525b49c799f542f62ac0c
>[negroni] Completed 200 OK in 165.648µs
>[negroni] Started GET /queue/2bbaf93134b525b49c799f542f62ac0c
>[negroni] Completed 200 OK in 229.158µs
>[kubeexec] DEBUG 2018/06/08 08:34:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_79d83a099d1495f94123b0ca3fa0bac2
>Result: volume start: vol_79d83a099d1495f94123b0ca3fa0bac2: success
>[heketi] INFO 2018/06/08 08:34:18 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:34:18 asynchttp.go:292: Completed job 2bbaf93134b525b49c799f542f62ac0c in 5.803275233s
>[negroni] Started GET /queue/2bbaf93134b525b49c799f542f62ac0c
>[negroni] Completed 303 See Other in 235.328µs
>[negroni] Started GET /volumes/79d83a099d1495f94123b0ca3fa0bac2
>[negroni] Completed 200 OK in 3.572477ms
>[negroni] Started DELETE /volumes/79d83a099d1495f94123b0ca3fa0bac2
>[negroni] Completed 202 Accepted in 14.139887ms
>[asynchttp] INFO 2018/06/08 08:34:18 asynchttp.go:288: Started job 7c1b072ba88366cedd6a5c3427092a14
>[heketi] INFO 2018/06/08 08:34:18 Started async operation: Delete Volume
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 148.66µs
>[kubeexec] DEBUG 2018/06/08 08:34:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_79d83a099d1495f94123b0ca3fa0bac2 --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 213.843µs
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 240.44µs
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 215.6µs
>[kubeexec] DEBUG 2018/06/08 08:34:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_79d83a099d1495f94123b0ca3fa0bac2 force
>Result: volume stop: vol_79d83a099d1495f94123b0ca3fa0bac2: success
>[kubeexec] DEBUG 2018/06/08 08:34:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_79d83a099d1495f94123b0ca3fa0bac2
>Result: volume delete: vol_79d83a099d1495f94123b0ca3fa0bac2: success
>[heketi] INFO 2018/06/08 08:34:22 Deleting brick 3a7d32deb339b6fa432b0fcbc36ae94c
>[heketi] INFO 2018/06/08 08:34:22 Deleting brick 506f35d8abd5d43ae307f4211452c5d9
>[heketi] INFO 2018/06/08 08:34:22 Deleting brick d59b610bd2c0773ffd302f1611ae4a02
>[kubeexec] DEBUG 2018/06/08 08:34:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3a7d32deb339b6fa432b0fcbc36ae94c | cut -d" " -f1
>Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3a7d32deb339b6fa432b0fcbc36ae94c
>[kubeexec] DEBUG 2018/06/08 08:34:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d59b610bd2c0773ffd302f1611ae4a02 | cut -d" " -f1
>Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d59b610bd2c0773ffd302f1611ae4a02
>[kubeexec] DEBUG 2018/06/08 08:34:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_506f35d8abd5d43ae307f4211452c5d9 | cut -d" " -f1
>Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_506f35d8abd5d43ae307f4211452c5d9
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 155.613µs
>[kubeexec] DEBUG 2018/06/08 08:34:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3a7d32deb339b6fa432b0fcbc36ae94c
>
>Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_3a7d32deb339b6fa432b0fcbc36ae94c
>[kubeexec] DEBUG 2018/06/08 08:34:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d59b610bd2c0773ffd302f1611ae4a02
>
>Result: vg_d389f0278a774bd7443a09af960961d8/tp_d59b610bd2c0773ffd302f1611ae4a02
>[kubeexec] DEBUG 2018/06/08 08:34:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_506f35d8abd5d43ae307f4211452c5d9
>
>Result: vg_9394bc70699b006c5460c9f654cf345f/tp_506f35d8abd5d43ae307f4211452c5d9
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 201.088µs
>[kubeexec] DEBUG 2018/06/08 08:34:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3a7d32deb339b6fa432b0fcbc36ae94c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d59b610bd2c0773ffd302f1611ae4a02
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_506f35d8abd5d43ae307f4211452c5d9
>Result:
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 197.278µs
>[kubeexec] DEBUG 2018/06/08 08:34:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_3a7d32deb339b6fa432b0fcbc36ae94c/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_d59b610bd2c0773ffd302f1611ae4a02/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_506f35d8abd5d43ae307f4211452c5d9/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 185.416µs
>[kubeexec] DEBUG 2018/06/08 08:34:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3a7d32deb339b6fa432b0fcbc36ae94c
>
>Result: Logical volume "brick_3a7d32deb339b6fa432b0fcbc36ae94c" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:34:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d59b610bd2c0773ffd302f1611ae4a02
>
>Result: Logical volume "brick_d59b610bd2c0773ffd302f1611ae4a02" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:34:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_506f35d8abd5d43ae307f4211452c5d9
>
>Result: Logical volume "brick_506f35d8abd5d43ae307f4211452c5d9" successfully removed
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 202.582µs
>[kubeexec] DEBUG 2018/06/08 08:34:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_3a7d32deb339b6fa432b0fcbc36ae94c
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:34:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_d59b610bd2c0773ffd302f1611ae4a02
>
>Result: 0
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 212.534µs
>[kubeexec] DEBUG 2018/06/08 08:34:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_506f35d8abd5d43ae307f4211452c5d9
>
>Result: 0
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 131.39µs
>[kubeexec] DEBUG 2018/06/08 08:34:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_3a7d32deb339b6fa432b0fcbc36ae94c
>
>Result: Logical volume "tp_3a7d32deb339b6fa432b0fcbc36ae94c" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:34:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_d59b610bd2c0773ffd302f1611ae4a02
>
>Result: Logical volume "tp_d59b610bd2c0773ffd302f1611ae4a02" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:34:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_506f35d8abd5d43ae307f4211452c5d9
>
>Result: Logical volume "tp_506f35d8abd5d43ae307f4211452c5d9" successfully removed
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 200 OK in 203.682µs
>[kubeexec] DEBUG 2018/06/08 08:34:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3a7d32deb339b6fa432b0fcbc36ae94c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d59b610bd2c0773ffd302f1611ae4a02
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_506f35d8abd5d43ae307f4211452c5d9
>Result:
>[heketi] INFO 2018/06/08 08:34:30 Delete Volume succeeded
>[asynchttp] INFO 2018/06/08 08:34:30 asynchttp.go:292: Completed job 7c1b072ba88366cedd6a5c3427092a14 in 11.572650265s
>[negroni] Started GET /queue/7c1b072ba88366cedd6a5c3427092a14
>[negroni] Completed 204 No Content in 224.231µs
>[negroni] Started POST /volumes
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #0
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #0
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #1
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #0
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #1
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #2
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #3
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #0
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #1
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #2
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #3
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #4
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #5
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #6
>[heketi] ERROR 2018/06/08 08:34:30 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Create Volume Build Failed: No space
>[negroni] Completed 500 Internal Server Error in 20.545628ms
>[negroni] Started POST /volumes
>[heketi] INFO 2018/06/08 08:34:30 Allocating brick set #0
>[negroni] Completed 202 Accepted in 15.097946ms
>[asynchttp] INFO 2018/06/08 08:34:30 asynchttp.go:288: Started job ec8ee4e8fce2ab6e4876ed8c4f16b20f
>[heketi] INFO 2018/06/08 08:34:30 Started async operation: Create Volume
>[negroni] Started GET /queue/ec8ee4e8fce2ab6e4876ed8c4f16b20f
>[heketi] INFO 2018/06/08 08:34:30 Creating brick 50f7fa4e28c0ada62aee23b461d4a57b
>[negroni] Completed 200 OK in 149.632µs
>[heketi] INFO 2018/06/08 08:34:30 Creating brick c91ff80db7a76c089815fbb5acd3b359
>[heketi] INFO 2018/06/08 08:34:30 Creating brick 3ffb812f7f348d836df1be2db94be607
>[kubeexec] DEBUG 2018/06/08 08:34:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3ffb812f7f348d836df1be2db94be607
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_50f7fa4e28c0ada62aee23b461d4a57b
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c91ff80db7a76c089815fbb5acd3b359
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_3ffb812f7f348d836df1be2db94be607 --virtualsize 1048576K --name brick_3ffb812f7f348d836df1be2db94be607
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_3ffb812f7f348d836df1be2db94be607" created.
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_50f7fa4e28c0ada62aee23b461d4a57b --virtualsize 1048576K --name brick_50f7fa4e28c0ada62aee23b461d4a57b
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_50f7fa4e28c0ada62aee23b461d4a57b" created.
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3ffb812f7f348d836df1be2db94be607
>Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3ffb812f7f348d836df1be2db94be607 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_50f7fa4e28c0ada62aee23b461d4a57b
>Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_50f7fa4e28c0ada62aee23b461d4a57b isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3ffb812f7f348d836df1be2db94be607 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3ffb812f7f348d836df1be2db94be607 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_3a4297677881963e3f80124971d50eea/tp_c91ff80db7a76c089815fbb5acd3b359 --virtualsize 1048576K --name brick_c91ff80db7a76c089815fbb5acd3b359
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_c91ff80db7a76c089815fbb5acd3b359" created.
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_50f7fa4e28c0ada62aee23b461d4a57b /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_50f7fa4e28c0ada62aee23b461d4a57b xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3ffb812f7f348d836df1be2db94be607 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3ffb812f7f348d836df1be2db94be607
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c91ff80db7a76c089815fbb5acd3b359
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c91ff80db7a76c089815fbb5acd3b359 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_50f7fa4e28c0ada62aee23b461d4a57b /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_50f7fa4e28c0ada62aee23b461d4a57b
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3ffb812f7f348d836df1be2db94be607/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c91ff80db7a76c089815fbb5acd3b359 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c91ff80db7a76c089815fbb5acd3b359 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started GET /queue/ec8ee4e8fce2ab6e4876ed8c4f16b20f
>[negroni] Completed 200 OK in 222.959µs
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_50f7fa4e28c0ada62aee23b461d4a57b/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c91ff80db7a76c089815fbb5acd3b359 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c91ff80db7a76c089815fbb5acd3b359
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c91ff80db7a76c089815fbb5acd3b359/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:34:32 Creating volume vol_93defdf227537c8facf542c2a30034c5 replica 3
>[kubeexec] DEBUG 2018/06/08 08:34:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_93defdf227537c8facf542c2a30034c5 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_50f7fa4e28c0ada62aee23b461d4a57b/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3ffb812f7f348d836df1be2db94be607/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c91ff80db7a76c089815fbb5acd3b359/brick
>Result: volume create: vol_93defdf227537c8facf542c2a30034c5: success: please start the volume to access data
>[negroni] Started GET /queue/ec8ee4e8fce2ab6e4876ed8c4f16b20f
>[negroni] Completed 200 OK in 131.596µs
>[negroni] Started GET /queue/ec8ee4e8fce2ab6e4876ed8c4f16b20f
>[negroni] Completed 200 OK in 149.645µs
>[negroni] Started GET /queue/ec8ee4e8fce2ab6e4876ed8c4f16b20f
>[negroni] Completed 200 OK in 164.553µs
>[negroni] Started GET /queue/ec8ee4e8fce2ab6e4876ed8c4f16b20f
>[negroni] Completed 200 OK in 171.589µs
>[kubeexec] DEBUG 2018/06/08 08:34:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_93defdf227537c8facf542c2a30034c5
>Result: volume start: vol_93defdf227537c8facf542c2a30034c5: success
>[heketi] INFO 2018/06/08 08:34:36 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:34:36 asynchttp.go:292: Completed job ec8ee4e8fce2ab6e4876ed8c4f16b20f in 5.709673516s
>[negroni] Started GET /queue/ec8ee4e8fce2ab6e4876ed8c4f16b20f
>[negroni] Completed 303 See Other in 297.284µs
>[negroni] Started GET /volumes/93defdf227537c8facf542c2a30034c5
>[negroni] Completed 200 OK in 3.673265ms
>[negroni] Started POST /volumes/93defdf227537c8facf542c2a30034c5/expand
>[heketi] INFO 2018/06/08 08:34:36 Allocating brick set #0
>[negroni] Completed 202 Accepted in 16.658397ms
>[asynchttp] INFO 2018/06/08 08:34:36 asynchttp.go:288: Started job 6652127cae25ca9516aead3a36fab9ac
>[heketi] INFO 2018/06/08 08:34:36 Started async operation: Expand Volume
>[heketi] INFO 2018/06/08 08:34:36 Creating brick 0fe2e9af9fd76f4fccdcf0e37a1f89ff
>[heketi] INFO 2018/06/08 08:34:36 Creating brick a0e0349eea524aa0f4d3ab0a34fa1304
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 112.114µs
>[heketi] INFO 2018/06/08 08:34:36 Creating brick 5bec7c663a12e79390ec9be885d88a1d
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a0e0349eea524aa0f4d3ab0a34fa1304
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_5bec7c663a12e79390ec9be885d88a1d
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_0fe2e9af9fd76f4fccdcf0e37a1f89ff --virtualsize 2097152K --name brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff" created.
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_a0e0349eea524aa0f4d3ab0a34fa1304 --virtualsize 2097152K --name brick_a0e0349eea524aa0f4d3ab0a34fa1304
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_a0e0349eea524aa0f4d3ab0a34fa1304" created.
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_5bec7c663a12e79390ec9be885d88a1d --virtualsize 2097152K --name brick_5bec7c663a12e79390ec9be885d88a1d
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_5bec7c663a12e79390ec9be885d88a1d" created.
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_5bec7c663a12e79390ec9be885d88a1d
>Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_5bec7c663a12e79390ec9be885d88a1d isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_a0e0349eea524aa0f4d3ab0a34fa1304
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_a0e0349eea524aa0f4d3ab0a34fa1304 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_a0e0349eea524aa0f4d3ab0a34fa1304 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a0e0349eea524aa0f4d3ab0a34fa1304 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_5bec7c663a12e79390ec9be885d88a1d /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_5bec7c663a12e79390ec9be885d88a1d xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_a0e0349eea524aa0f4d3ab0a34fa1304 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a0e0349eea524aa0f4d3ab0a34fa1304
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_5bec7c663a12e79390ec9be885d88a1d /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_5bec7c663a12e79390ec9be885d88a1d
>Result:
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 197.518µs
>[kubeexec] DEBUG 2018/06/08 08:34:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a0e0349eea524aa0f4d3ab0a34fa1304/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:34:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_5bec7c663a12e79390ec9be885d88a1d/brick
>Result:
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 130.119µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 133.441µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 125.648µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 191.362µs
>[kubeexec] DEBUG 2018/06/08 08:34:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume add-brick vol_93defdf227537c8facf542c2a30034c5 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_5bec7c663a12e79390ec9be885d88a1d/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a0e0349eea524aa0f4d3ab0a34fa1304/brick
>Result: volume add-brick: success
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 223.595µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 133.99µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 170.178µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 331.391µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 142.35µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 173.185µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 216.519µs
>[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac
>[negroni] Completed 200 OK in 
150.373µs >[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac >[negroni] Completed 200 OK in 180.087µs >[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac >[negroni] Completed 200 OK in 148.493µs >[kubeexec] DEBUG 2018/06/08 08:34:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume rebalance vol_93defdf227537c8facf542c2a30034c5 start >Result: volume rebalance: vol_93defdf227537c8facf542c2a30034c5: success: Rebalance on vol_93defdf227537c8facf542c2a30034c5 has been started successfully. Use rebalance status command to check status of the rebalance process. >ID: 1275029d-65a4-46a1-b8ea-a86ac0817ba6 > >[heketi] INFO 2018/06/08 08:34:52 Expand Volume succeeded >[asynchttp] INFO 2018/06/08 08:34:52 asynchttp.go:292: Completed job 6652127cae25ca9516aead3a36fab9ac in 15.731453773s >[negroni] Started GET /queue/6652127cae25ca9516aead3a36fab9ac >[negroni] Completed 303 See Other in 265.694µs >[negroni] Started GET /volumes/93defdf227537c8facf542c2a30034c5 >[negroni] Completed 200 OK in 4.711108ms >[negroni] Started GET /volumes/93defdf227537c8facf542c2a30034c5 >[negroni] Completed 200 OK in 1.630447ms >[negroni] Started DELETE /volumes/93defdf227537c8facf542c2a30034c5 >[negroni] Completed 202 Accepted in 10.753205ms >[asynchttp] INFO 2018/06/08 08:34:53 asynchttp.go:288: Started job cc8c30c83cc8de532e15e823f1889908 >[heketi] INFO 2018/06/08 08:34:53 Started async operation: Delete Volume >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 111.942µs >[kubeexec] DEBUG 2018/06/08 08:34:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_93defdf227537c8facf542c2a30034c5 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > 
<opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 162.665µs >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 268.718µs >[kubeexec] DEBUG 2018/06/08 08:34:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_93defdf227537c8facf542c2a30034c5 force >Result: volume stop: vol_93defdf227537c8facf542c2a30034c5: success >[kubeexec] DEBUG 2018/06/08 08:34:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_93defdf227537c8facf542c2a30034c5 >Result: volume delete: vol_93defdf227537c8facf542c2a30034c5: success >[heketi] INFO 2018/06/08 08:34:56 Deleting brick a0e0349eea524aa0f4d3ab0a34fa1304 >[heketi] INFO 2018/06/08 08:34:56 Deleting brick 0fe2e9af9fd76f4fccdcf0e37a1f89ff >[heketi] INFO 2018/06/08 08:34:56 Deleting brick 5bec7c663a12e79390ec9be885d88a1d >[heketi] INFO 2018/06/08 08:34:56 Deleting brick c91ff80db7a76c089815fbb5acd3b359 >[heketi] INFO 2018/06/08 08:34:56 Deleting brick 3ffb812f7f348d836df1be2db94be607 >[heketi] INFO 2018/06/08 08:34:56 Deleting brick 50f7fa4e28c0ada62aee23b461d4a57b >[kubeexec] DEBUG 2018/06/08 08:34:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff >[kubeexec] DEBUG 2018/06/08 08:34:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_5bec7c663a12e79390ec9be885d88a1d | cut -d" " -f1 >Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_5bec7c663a12e79390ec9be885d88a1d >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 155.619µs >[kubeexec] DEBUG 2018/06/08 08:34:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a0e0349eea524aa0f4d3ab0a34fa1304 | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_a0e0349eea524aa0f4d3ab0a34fa1304 >[kubeexec] DEBUG 2018/06/08 08:34:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c91ff80db7a76c089815fbb5acd3b359 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c91ff80db7a76c089815fbb5acd3b359 >[kubeexec] DEBUG 2018/06/08 08:34:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_50f7fa4e28c0ada62aee23b461d4a57b | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_50f7fa4e28c0ada62aee23b461d4a57b >[kubeexec] DEBUG 2018/06/08 08:34:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3ffb812f7f348d836df1be2db94be607 | cut -d" " -f1 >Result: 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3ffb812f7f348d836df1be2db94be607 >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 229.858µs >[kubeexec] DEBUG 2018/06/08 08:34:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff > >Result: vg_3a4297677881963e3f80124971d50eea/tp_0fe2e9af9fd76f4fccdcf0e37a1f89ff >[kubeexec] DEBUG 2018/06/08 08:34:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_5bec7c663a12e79390ec9be885d88a1d > >Result: vg_9394bc70699b006c5460c9f654cf345f/tp_5bec7c663a12e79390ec9be885d88a1d >[kubeexec] DEBUG 2018/06/08 08:34:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_a0e0349eea524aa0f4d3ab0a34fa1304 > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_a0e0349eea524aa0f4d3ab0a34fa1304 >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 154.737µs >[kubeexec] DEBUG 2018/06/08 08:34:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c91ff80db7a76c089815fbb5acd3b359 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_c91ff80db7a76c089815fbb5acd3b359 >[kubeexec] DEBUG 2018/06/08 08:34:58 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_50f7fa4e28c0ada62aee23b461d4a57b > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_50f7fa4e28c0ada62aee23b461d4a57b >[kubeexec] DEBUG 2018/06/08 08:34:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3ffb812f7f348d836df1be2db94be607 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_3ffb812f7f348d836df1be2db94be607 >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 250.558µs >[kubeexec] DEBUG 2018/06/08 08:34:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff >Result: >[kubeexec] DEBUG 2018/06/08 08:35:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_5bec7c663a12e79390ec9be885d88a1d >Result: >[kubeexec] DEBUG 2018/06/08 08:35:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a0e0349eea524aa0f4d3ab0a34fa1304 >Result: >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 203.331µs >[kubeexec] DEBUG 2018/06/08 08:35:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c91ff80db7a76c089815fbb5acd3b359 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_50f7fa4e28c0ada62aee23b461d4a57b >Result: >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 145.776µs >[kubeexec] DEBUG 2018/06/08 08:35:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3ffb812f7f348d836df1be2db94be607 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 176.777µs >[kubeexec] DEBUG 2018/06/08 08:35:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_5bec7c663a12e79390ec9be885d88a1d/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:35:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_a0e0349eea524aa0f4d3ab0a34fa1304/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 265.898µs >[kubeexec] DEBUG 2018/06/08 08:35:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: 
Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_c91ff80db7a76c089815fbb5acd3b359/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:35:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_50f7fa4e28c0ada62aee23b461d4a57b/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:35:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_3ffb812f7f348d836df1be2db94be607/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 220.631µs >[kubeexec] DEBUG 2018/06/08 08:35:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff > >Result: Logical volume "brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_5bec7c663a12e79390ec9be885d88a1d > >Result: Logical volume "brick_5bec7c663a12e79390ec9be885d88a1d" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_a0e0349eea524aa0f4d3ab0a34fa1304 > >Result: Logical volume "brick_a0e0349eea524aa0f4d3ab0a34fa1304" successfully removed >[negroni] Started GET 
/queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 269.434µs >[kubeexec] DEBUG 2018/06/08 08:35:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c91ff80db7a76c089815fbb5acd3b359 > >Result: Logical volume "brick_c91ff80db7a76c089815fbb5acd3b359" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_50f7fa4e28c0ada62aee23b461d4a57b > >Result: Logical volume "brick_50f7fa4e28c0ada62aee23b461d4a57b" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3ffb812f7f348d836df1be2db94be607 > >Result: Logical volume "brick_3ffb812f7f348d836df1be2db94be607" successfully removed >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 154.638µs >[kubeexec] DEBUG 2018/06/08 08:35:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_0fe2e9af9fd76f4fccdcf0e37a1f89ff > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:35:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_5bec7c663a12e79390ec9be885d88a1d > >Result: 0 >[negroni] Started GET 
/queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 292.724µs >[kubeexec] DEBUG 2018/06/08 08:35:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_a0e0349eea524aa0f4d3ab0a34fa1304 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:35:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_c91ff80db7a76c089815fbb5acd3b359 > >Result: 0 >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 213.515µs >[kubeexec] DEBUG 2018/06/08 08:35:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_50f7fa4e28c0ada62aee23b461d4a57b > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:35:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_3ffb812f7f348d836df1be2db94be607 > >Result: 0 >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 150.455µs >[kubeexec] DEBUG 2018/06/08 08:35:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_0fe2e9af9fd76f4fccdcf0e37a1f89ff > >Result: Logical volume "tp_0fe2e9af9fd76f4fccdcf0e37a1f89ff" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_5bec7c663a12e79390ec9be885d88a1d > >Result: Logical volume "tp_5bec7c663a12e79390ec9be885d88a1d" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_a0e0349eea524aa0f4d3ab0a34fa1304 > >Result: Logical volume "tp_a0e0349eea524aa0f4d3ab0a34fa1304" successfully removed >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 164.743µs >[kubeexec] DEBUG 2018/06/08 08:35:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_c91ff80db7a76c089815fbb5acd3b359 > >Result: Logical volume "tp_c91ff80db7a76c089815fbb5acd3b359" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_50f7fa4e28c0ada62aee23b461d4a57b > >Result: Logical volume "tp_50f7fa4e28c0ada62aee23b461d4a57b" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_3ffb812f7f348d836df1be2db94be607 > >Result: Logical volume "tp_3ffb812f7f348d836df1be2db94be607" successfully removed >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 144.584µs >[kubeexec] DEBUG 2018/06/08 08:35:11 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_0fe2e9af9fd76f4fccdcf0e37a1f89ff >Result: >[kubeexec] DEBUG 2018/06/08 08:35:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_5bec7c663a12e79390ec9be885d88a1d >Result: >[kubeexec] DEBUG 2018/06/08 08:35:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a0e0349eea524aa0f4d3ab0a34fa1304 >Result: >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 256.528µs >[kubeexec] DEBUG 2018/06/08 08:35:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c91ff80db7a76c089815fbb5acd3b359 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_50f7fa4e28c0ada62aee23b461d4a57b >Result: >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 200 OK in 214.385µs >[kubeexec] DEBUG 2018/06/08 08:35:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3ffb812f7f348d836df1be2db94be607 >Result: >[heketi] INFO 2018/06/08 08:35:13 Delete Volume succeeded 
>[asynchttp] INFO 2018/06/08 08:35:13 asynchttp.go:292: Completed job cc8c30c83cc8de532e15e823f1889908 in 20.129522656s >[heketi] INFO 2018/06/08 08:35:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:35:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:35:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 11min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option 
heketidbstorage-server.listen-port=49152 > └─9396 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:35:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:35:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:35:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 11min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id 
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─9326 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:35:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:35:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[negroni] Started GET /queue/cc8c30c83cc8de532e15e823f1889908 >[negroni] Completed 204 No Content in 142.056µs >[kubeexec] DEBUG 2018/06/08 08:35:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 9min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 
--xlator-option heketidbstorage-server.listen-port=49152 > └─7514 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:35:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:35:14 Cleaned 0 nodes from health cache >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:35:15 Allocating brick set #0 >[negroni] Completed 202 Accepted in 18.143951ms >[asynchttp] INFO 2018/06/08 08:35:15 asynchttp.go:288: Started job c2357b5efc1fbcc9cef37b416ff6b7bc >[heketi] INFO 2018/06/08 08:35:15 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:35:15 Creating brick 924c9d4c901ad0decf1177966490ab2d >[heketi] INFO 2018/06/08 08:35:15 Creating brick 0a21bd3dabbed98904d8c343a304aa58 >[heketi] INFO 2018/06/08 08:35:15 Creating brick 3075e0705c078c9e77f2395371953343 >[negroni] Started GET /queue/c2357b5efc1fbcc9cef37b416ff6b7bc >[negroni] Completed 200 OK in 177.428µs >[kubeexec] DEBUG 2018/06/08 08:35:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_924c9d4c901ad0decf1177966490ab2d >Result: >[kubeexec] DEBUG 2018/06/08 08:35:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p 
/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0a21bd3dabbed98904d8c343a304aa58 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_924c9d4c901ad0decf1177966490ab2d --virtualsize 10485760K --name brick_924c9d4c901ad0decf1177966490ab2d >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_924c9d4c901ad0decf1177966490ab2d" created. >[kubeexec] DEBUG 2018/06/08 08:35:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_3075e0705c078c9e77f2395371953343 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_924c9d4c901ad0decf1177966490ab2d >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_924c9d4c901ad0decf1177966490ab2d isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print 
\"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_924c9d4c901ad0decf1177966490ab2d /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_924c9d4c901ad0decf1177966490ab2d xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_0a21bd3dabbed98904d8c343a304aa58 --virtualsize 10485760K --name brick_0a21bd3dabbed98904d8c343a304aa58 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_0a21bd3dabbed98904d8c343a304aa58" created. >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0a21bd3dabbed98904d8c343a304aa58 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0a21bd3dabbed98904d8c343a304aa58 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_3075e0705c078c9e77f2395371953343 --virtualsize 10485760K --name 
brick_3075e0705c078c9e77f2395371953343 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_3075e0705c078c9e77f2395371953343" created. >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_924c9d4c901ad0decf1177966490ab2d /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_924c9d4c901ad0decf1177966490ab2d >Result: >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0a21bd3dabbed98904d8c343a304aa58 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0a21bd3dabbed98904d8c343a304aa58 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_924c9d4c901ad0decf1177966490ab2d/brick >Result: >[negroni] Started GET /queue/c2357b5efc1fbcc9cef37b416ff6b7bc >[negroni] Completed 200 OK in 127.317µs >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_3075e0705c078c9e77f2395371953343 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_3075e0705c078c9e77f2395371953343 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = 
bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0a21bd3dabbed98904d8c343a304aa58 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0a21bd3dabbed98904d8c343a304aa58 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_3075e0705c078c9e77f2395371953343 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_3075e0705c078c9e77f2395371953343 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0a21bd3dabbed98904d8c343a304aa58/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_3075e0705c078c9e77f2395371953343 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_3075e0705c078c9e77f2395371953343 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_3075e0705c078c9e77f2395371953343/brick >Result: >[cmdexec] INFO 2018/06/08 08:35:16 Creating volume vol_f08c8978937a9971fb637b34f333d318 replica 3 >[kubeexec] DEBUG 2018/06/08 08:35:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_f08c8978937a9971fb637b34f333d318 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0a21bd3dabbed98904d8c343a304aa58/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_3075e0705c078c9e77f2395371953343/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_924c9d4c901ad0decf1177966490ab2d/brick >Result: volume create: vol_f08c8978937a9971fb637b34f333d318: success: please start the volume to access data >[negroni] Started GET /queue/c2357b5efc1fbcc9cef37b416ff6b7bc >[negroni] Completed 200 OK in 129.781µs >[negroni] Started GET /queue/c2357b5efc1fbcc9cef37b416ff6b7bc >[negroni] Completed 200 OK in 127.703µs >[negroni] Started GET /queue/c2357b5efc1fbcc9cef37b416ff6b7bc >[negroni] Completed 200 OK in 140.882µs >[negroni] Started GET /queue/c2357b5efc1fbcc9cef37b416ff6b7bc >[negroni] Completed 200 OK in 203.54µs >[kubeexec] DEBUG 2018/06/08 08:35:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_f08c8978937a9971fb637b34f333d318 >Result: volume start: vol_f08c8978937a9971fb637b34f333d318: success >[heketi] INFO 2018/06/08 08:35:21 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:35:21 asynchttp.go:292: Completed job c2357b5efc1fbcc9cef37b416ff6b7bc in 5.808592133s >[negroni] Started GET /queue/c2357b5efc1fbcc9cef37b416ff6b7bc >[negroni] Completed 303 See Other in 
234.518µs >[negroni] Started GET /volumes/f08c8978937a9971fb637b34f333d318 >[negroni] Completed 200 OK in 3.374637ms >[negroni] Started DELETE /volumes/f08c8978937a9971fb637b34f333d318 >[negroni] Completed 202 Accepted in 9.719209ms >[asynchttp] INFO 2018/06/08 08:35:22 asynchttp.go:288: Started job 2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[heketi] INFO 2018/06/08 08:35:22 Started async operation: Delete Volume >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 112.446µs >[kubeexec] DEBUG 2018/06/08 08:35:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_f08c8978937a9971fb637b34f333d318 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 229.661µs >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 229.361µs >[kubeexec] DEBUG 2018/06/08 08:35:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_f08c8978937a9971fb637b34f333d318 force >Result: volume stop: vol_f08c8978937a9971fb637b34f333d318: success >[kubeexec] DEBUG 2018/06/08 08:35:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_f08c8978937a9971fb637b34f333d318 >Result: volume delete: vol_f08c8978937a9971fb637b34f333d318: success >[heketi] INFO 2018/06/08 08:35:25 Deleting brick 0a21bd3dabbed98904d8c343a304aa58 >[heketi] INFO 2018/06/08 08:35:25 Deleting brick 3075e0705c078c9e77f2395371953343 
>[heketi] INFO 2018/06/08 08:35:25 Deleting brick 924c9d4c901ad0decf1177966490ab2d >[kubeexec] DEBUG 2018/06/08 08:35:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_924c9d4c901ad0decf1177966490ab2d | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_924c9d4c901ad0decf1177966490ab2d >[kubeexec] DEBUG 2018/06/08 08:35:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0a21bd3dabbed98904d8c343a304aa58 | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0a21bd3dabbed98904d8c343a304aa58 >[kubeexec] DEBUG 2018/06/08 08:35:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_3075e0705c078c9e77f2395371953343 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_3075e0705c078c9e77f2395371953343 >[kubeexec] DEBUG 2018/06/08 08:35:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0a21bd3dabbed98904d8c343a304aa58 > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_0a21bd3dabbed98904d8c343a304aa58 >[kubeexec] DEBUG 2018/06/08 08:35:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_924c9d4c901ad0decf1177966490ab2d > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_924c9d4c901ad0decf1177966490ab2d >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 208.912µs >[kubeexec] DEBUG 2018/06/08 08:35:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_3075e0705c078c9e77f2395371953343 > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_3075e0705c078c9e77f2395371953343 >[kubeexec] DEBUG 2018/06/08 08:35:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0a21bd3dabbed98904d8c343a304aa58 >Result: >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 170.333µs >[kubeexec] DEBUG 2018/06/08 08:35:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_924c9d4c901ad0decf1177966490ab2d >Result: >[kubeexec] DEBUG 2018/06/08 08:35:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_3075e0705c078c9e77f2395371953343 >Result: >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 180.986µs >[kubeexec] DEBUG 2018/06/08 08:35:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save 
"/brick_0a21bd3dabbed98904d8c343a304aa58/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:35:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_924c9d4c901ad0decf1177966490ab2d/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 229.308µs >[kubeexec] DEBUG 2018/06/08 08:35:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_3075e0705c078c9e77f2395371953343/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:35:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_0a21bd3dabbed98904d8c343a304aa58 > >Result: Logical volume "brick_0a21bd3dabbed98904d8c343a304aa58" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_924c9d4c901ad0decf1177966490ab2d > >Result: Logical volume "brick_924c9d4c901ad0decf1177966490ab2d" successfully removed >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 214.255µs >[kubeexec] DEBUG 2018/06/08 08:35:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_3075e0705c078c9e77f2395371953343 > >Result: Logical volume "brick_3075e0705c078c9e77f2395371953343" successfully removed >[kubeexec] DEBUG 2018/06/08 
08:35:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_0a21bd3dabbed98904d8c343a304aa58 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:35:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_924c9d4c901ad0decf1177966490ab2d > >Result: 0 >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 163.729µs >[kubeexec] DEBUG 2018/06/08 08:35:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_3075e0705c078c9e77f2395371953343 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:35:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_0a21bd3dabbed98904d8c343a304aa58 > >Result: Logical volume "tp_0a21bd3dabbed98904d8c343a304aa58" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_924c9d4c901ad0decf1177966490ab2d > >Result: Logical volume "tp_924c9d4c901ad0decf1177966490ab2d" successfully removed >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 141.089µs >[kubeexec] DEBUG 2018/06/08 08:35:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c 
Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_3075e0705c078c9e77f2395371953343 > >Result: Logical volume "tp_3075e0705c078c9e77f2395371953343" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_0a21bd3dabbed98904d8c343a304aa58 >Result: >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 200 OK in 174.937µs >[kubeexec] DEBUG 2018/06/08 08:35:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_924c9d4c901ad0decf1177966490ab2d >Result: >[kubeexec] DEBUG 2018/06/08 08:35:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_3075e0705c078c9e77f2395371953343 >Result: >[heketi] INFO 2018/06/08 08:35:33 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:35:33 asynchttp.go:292: Completed job 2b9ed3c45ef0a2c1d3e3e0fa5c33c832 in 10.544537747s >[negroni] Started GET /queue/2b9ed3c45ef0a2c1d3e3e0fa5c33c832 >[negroni] Completed 204 No Content in 167.211µs >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #3 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #0 >[heketi] INFO 
2018/06/08 08:35:33 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #3 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #4 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #5 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #6 >[heketi] INFO 2018/06/08 08:35:33 Allocating brick set #7 >[heketi] ERROR 2018/06/08 08:35:33 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Create Volume Build Failed: No space >[negroni] Completed 500 Internal Server Error in 21.13287ms >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:35:34 Allocating brick set #0 >[negroni] Completed 202 Accepted in 11.697203ms >[asynchttp] INFO 2018/06/08 08:35:34 asynchttp.go:288: Started job 70dc6ad014f42c11e49962509fcef1b7 >[heketi] INFO 2018/06/08 08:35:34 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:35:34 Creating brick 6a50e9b2961a5cd8deafffc0f1e7d401 >[heketi] INFO 2018/06/08 08:35:34 Creating brick 42b57e50453bebac4a1d2fdcd70c3418 >[heketi] INFO 2018/06/08 08:35:34 Creating brick 3b99cfab90966e2c5ccb9ef5ab1cfa1d >[negroni] Started GET /queue/70dc6ad014f42c11e49962509fcef1b7 >[negroni] Completed 200 OK in 98.431µs >[kubeexec] DEBUG 2018/06/08 08:35:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d >Result: >[kubeexec] DEBUG 2018/06/08 08:35:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_42b57e50453bebac4a1d2fdcd70c3418 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_6a50e9b2961a5cd8deafffc0f1e7d401 --virtualsize 10485760K --name brick_6a50e9b2961a5cd8deafffc0f1e7d401 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_6a50e9b2961a5cd8deafffc0f1e7d401" created. >[kubeexec] DEBUG 2018/06/08 08:35:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_3b99cfab90966e2c5ccb9ef5ab1cfa1d --virtualsize 10485760K --name brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d" created. >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_42b57e50453bebac4a1d2fdcd70c3418 --virtualsize 10485760K --name brick_42b57e50453bebac4a1d2fdcd70c3418 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_42b57e50453bebac4a1d2fdcd70c3418" created. 
>[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6a50e9b2961a5cd8deafffc0f1e7d401 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6a50e9b2961a5cd8deafffc0f1e7d401 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_42b57e50453bebac4a1d2fdcd70c3418 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_42b57e50453bebac4a1d2fdcd70c3418 isize=512 agcount=16, 
agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6a50e9b2961a5cd8deafffc0f1e7d401 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_42b57e50453bebac4a1d2fdcd70c3418 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_42b57e50453bebac4a1d2fdcd70c3418 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d >Result: >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6a50e9b2961a5cd8deafffc0f1e7d401 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_42b57e50453bebac4a1d2fdcd70c3418 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_42b57e50453bebac4a1d2fdcd70c3418 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_42b57e50453bebac4a1d2fdcd70c3418/brick >Result: >[cmdexec] INFO 2018/06/08 08:35:35 Creating volume 
vol_4a5dacb31fced8db04ae8887e9b873ac replica 3 >[negroni] Started GET /queue/70dc6ad014f42c11e49962509fcef1b7 >[negroni] Completed 200 OK in 139.572µs >[kubeexec] DEBUG 2018/06/08 08:35:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_4a5dacb31fced8db04ae8887e9b873ac replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_42b57e50453bebac4a1d2fdcd70c3418/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d/brick >Result: volume create: vol_4a5dacb31fced8db04ae8887e9b873ac: success: please start the volume to access data >[negroni] Started GET /queue/70dc6ad014f42c11e49962509fcef1b7 >[negroni] Completed 200 OK in 141.181µs >[negroni] Started GET /queue/70dc6ad014f42c11e49962509fcef1b7 >[negroni] Completed 200 OK in 140.438µs >[kubeexec] DEBUG 2018/06/08 08:35:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_4a5dacb31fced8db04ae8887e9b873ac >Result: volume start: vol_4a5dacb31fced8db04ae8887e9b873ac: success >[heketi] INFO 2018/06/08 08:35:37 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:35:37 asynchttp.go:292: Completed job 70dc6ad014f42c11e49962509fcef1b7 in 3.457453444s >[negroni] Started GET /queue/70dc6ad014f42c11e49962509fcef1b7 >[negroni] Completed 303 See Other in 213.513µs >[negroni] Started GET /volumes/4a5dacb31fced8db04ae8887e9b873ac >[negroni] Completed 200 OK in 3.291239ms >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 146.999µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 315.345µs >[negroni] 
Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.563342ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 941.912µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.424225ms >[negroni] Started DELETE /volumes/4a5dacb31fced8db04ae8887e9b873ac >[negroni] Completed 202 Accepted in 11.916142ms >[asynchttp] INFO 2018/06/08 08:35:39 asynchttp.go:288: Started job 43057859a006c8ad74e7f5bf974ebf21 >[heketi] INFO 2018/06/08 08:35:39 Started async operation: Delete Volume >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 137.452µs >[kubeexec] DEBUG 2018/06/08 08:35:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_4a5dacb31fced8db04ae8887e9b873ac --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 165.247µs >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 135.016µs >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 210.597µs >[kubeexec] DEBUG 2018/06/08 08:35:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_4a5dacb31fced8db04ae8887e9b873ac force >Result: volume stop: vol_4a5dacb31fced8db04ae8887e9b873ac: success >[kubeexec] DEBUG 2018/06/08 08:35:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script 
volume delete vol_4a5dacb31fced8db04ae8887e9b873ac >Result: volume delete: vol_4a5dacb31fced8db04ae8887e9b873ac: success >[heketi] INFO 2018/06/08 08:35:42 Deleting brick 42b57e50453bebac4a1d2fdcd70c3418 >[heketi] INFO 2018/06/08 08:35:42 Deleting brick 3b99cfab90966e2c5ccb9ef5ab1cfa1d >[heketi] INFO 2018/06/08 08:35:42 Deleting brick 6a50e9b2961a5cd8deafffc0f1e7d401 >[kubeexec] DEBUG 2018/06/08 08:35:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401 | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6a50e9b2961a5cd8deafffc0f1e7d401 >[kubeexec] DEBUG 2018/06/08 08:35:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_42b57e50453bebac4a1d2fdcd70c3418 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_42b57e50453bebac4a1d2fdcd70c3418 >[kubeexec] DEBUG 2018/06/08 08:35:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d >[kubeexec] DEBUG 2018/06/08 08:35:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6a50e9b2961a5cd8deafffc0f1e7d401 > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_6a50e9b2961a5cd8deafffc0f1e7d401 
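Annotation: in the delete path above, heketi resolves each brick's backing device by filtering `mount` output through `grep -w <mount point> | cut -d" " -f1`, then maps that device to its thin pool with `lvs`. A minimal sketch of the same pipeline, run against canned sample `mount` output instead of a live system (the `sample_mount_output` helper and its data are illustrative, not part of heketi):

```shell
#!/bin/sh
# Reproduce heketi's lookup: mount | grep -w <brick mount point> | cut -d" " -f1
# Canned mount output shaped like the environment in this log.
sample_mount_output() {
  cat <<'EOF'
/dev/mapper/rhel-root on / type xfs (rw,relatime)
/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6a50e9b2961a5cd8deafffc0f1e7d401 on /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401 type xfs (rw,noatime)
EOF
}

brick_device() {
  # $1 = brick mount point; -w keeps a path that merely shares a prefix from matching
  sample_mount_output | grep -w "$1" | cut -d" " -f1
}

brick_device /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401
```

The printed device-mapper path is what the subsequent `lvs --noheadings --separator=/ -ovg_name,pool_lv <device>` call in the log receives.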
>[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 138.076µs >[kubeexec] DEBUG 2018/06/08 08:35:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_42b57e50453bebac4a1d2fdcd70c3418 > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_42b57e50453bebac4a1d2fdcd70c3418 >[kubeexec] DEBUG 2018/06/08 08:35:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_3b99cfab90966e2c5ccb9ef5ab1cfa1d >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 113.015µs >[kubeexec] DEBUG 2018/06/08 08:35:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_42b57e50453bebac4a1d2fdcd70c3418 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d >Result: >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 147.562µs 
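Annotation: the log records heketi's fstab bookkeeping — on create, an `awk "BEGIN {print ... >> /var/lib/heketi/fstab}"` append; on delete, a `sed -i.save "/brick_<id>/d"` removal after the brick is unmounted. A self-contained sketch of that pair of operations against a scratch file (paths and IDs copied from this log; the scratch file stands in for `/var/lib/heketi/fstab`):

```shell
#!/bin/sh
# Emulate heketi's fstab bookkeeping on a temp file instead of /var/lib/heketi/fstab.
fstab=$(mktemp)
brick=brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d
dev=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-$brick
mnt=/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/$brick

# Create path: append the mount entry via awk's own >> redirection, as in the log.
awk "BEGIN {print \"$dev $mnt xfs rw,inode64,noatime,nouuid 1 2\" >> \"$fstab\"}"

# Delete path: drop any line naming the brick, keeping a .save backup (GNU sed -i.save).
sed -i.save "/$brick/d" "$fstab"
```

After the `sed`, the entry is gone from the working file and preserved in `$fstab.save` — which is why a crash between `umount` and `sed` can leave stale fstab entries behind.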
>[kubeexec] DEBUG 2018/06/08 08:35:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_6a50e9b2961a5cd8deafffc0f1e7d401/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:35:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_42b57e50453bebac4a1d2fdcd70c3418/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:35:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 158.668µs >[kubeexec] DEBUG 2018/06/08 08:35:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6a50e9b2961a5cd8deafffc0f1e7d401 > >Result: Logical volume "brick_6a50e9b2961a5cd8deafffc0f1e7d401" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_42b57e50453bebac4a1d2fdcd70c3418 > >Result: Logical volume "brick_42b57e50453bebac4a1d2fdcd70c3418" successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d > >Result: Logical volume 
"brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d" successfully removed >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 180.712µs >[kubeexec] DEBUG 2018/06/08 08:35:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_6a50e9b2961a5cd8deafffc0f1e7d401 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:35:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_42b57e50453bebac4a1d2fdcd70c3418 > >Result: 0 >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 204.419µs >[kubeexec] DEBUG 2018/06/08 08:35:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_3b99cfab90966e2c5ccb9ef5ab1cfa1d > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:35:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_6a50e9b2961a5cd8deafffc0f1e7d401 > >Result: Logical volume "tp_6a50e9b2961a5cd8deafffc0f1e7d401" successfully removed >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 218.835µs >[kubeexec] DEBUG 2018/06/08 08:35:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_42b57e50453bebac4a1d2fdcd70c3418 > >Result: Logical volume "tp_42b57e50453bebac4a1d2fdcd70c3418" 
successfully removed >[kubeexec] DEBUG 2018/06/08 08:35:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_3b99cfab90966e2c5ccb9ef5ab1cfa1d > >Result: Logical volume "tp_3b99cfab90966e2c5ccb9ef5ab1cfa1d" successfully removed >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 200 OK in 220.978µs >[kubeexec] DEBUG 2018/06/08 08:35:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6a50e9b2961a5cd8deafffc0f1e7d401 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_42b57e50453bebac4a1d2fdcd70c3418 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_3b99cfab90966e2c5ccb9ef5ab1cfa1d >Result: >[heketi] INFO 2018/06/08 08:35:50 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:35:50 asynchttp.go:292: Completed job 43057859a006c8ad74e7f5bf974ebf21 in 11.548980199s >[negroni] Started GET /queue/43057859a006c8ad74e7f5bf974ebf21 >[negroni] Completed 204 No Content in 196.151µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 2.250353ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 372.524µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.364688ms >[negroni] Started GET 
/nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.494786ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.483396ms >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:35:51 Allocating brick set #0 >[negroni] Completed 202 Accepted in 12.952701ms >[asynchttp] INFO 2018/06/08 08:35:51 asynchttp.go:288: Started job a9ac68956dfb4c06cf5ad2dc5bbcaae8 >[heketi] INFO 2018/06/08 08:35:51 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:35:51 Creating brick 40cd1137d297de071f2da2a433be6fd7 >[heketi] INFO 2018/06/08 08:35:51 Creating brick 89391ff92f83c755ffb9c270e0f6f3c9 >[heketi] INFO 2018/06/08 08:35:51 Creating brick 16e82f0f7d6e040618ce84658b250ceb >[negroni] Started GET /queue/a9ac68956dfb4c06cf5ad2dc5bbcaae8 >[negroni] Completed 200 OK in 102.487µs >[kubeexec] DEBUG 2018/06/08 08:35:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_89391ff92f83c755ffb9c270e0f6f3c9 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_40cd1137d297de071f2da2a433be6fd7 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_16e82f0f7d6e040618ce84658b250ceb >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K 
--size 10485760K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_89391ff92f83c755ffb9c270e0f6f3c9 --virtualsize 10485760K --name brick_89391ff92f83c755ffb9c270e0f6f3c9 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_89391ff92f83c755ffb9c270e0f6f3c9" created. >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_40cd1137d297de071f2da2a433be6fd7 --virtualsize 10485760K --name brick_40cd1137d297de071f2da2a433be6fd7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_40cd1137d297de071f2da2a433be6fd7" created. >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_16e82f0f7d6e040618ce84658b250ceb --virtualsize 10485760K --name brick_16e82f0f7d6e040618ce84658b250ceb >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_16e82f0f7d6e040618ce84658b250ceb" created. 
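Annotation: each 10 GiB brick above is carved with `lvcreate ... --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin ... --virtualsize 10485760K`. The `--size`/`--virtualsize` figure is simply the requested size converted to KiB; the 53248K (52 MiB) metadata size comes from heketi's internal sizing, which here looks consistent with roughly 0.5 % of the pool rounded up to an extent multiple — treat that interpretation as an assumption, not documented behavior. A sketch of the size conversion only:

```shell
#!/bin/sh
# Convert a requested brick size in GiB to the KiB figure lvcreate receives.
gib_to_kib() {
  echo $(( $1 * 1024 * 1024 ))
}

size_kib=$(gib_to_kib 10)
echo "--size ${size_kib}K --virtualsize ${size_kib}K --chunksize 256K"
```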
>[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_89391ff92f83c755ffb9c270e0f6f3c9 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_89391ff92f83c755ffb9c270e0f6f3c9 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_40cd1137d297de071f2da2a433be6fd7 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_40cd1137d297de071f2da2a433be6fd7 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_16e82f0f7d6e040618ce84658b250ceb >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_16e82f0f7d6e040618ce84658b250ceb isize=512 agcount=16, 
agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_89391ff92f83c755ffb9c270e0f6f3c9 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_89391ff92f83c755ffb9c270e0f6f3c9 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_40cd1137d297de071f2da2a433be6fd7 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_40cd1137d297de071f2da2a433be6fd7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_16e82f0f7d6e040618ce84658b250ceb /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_16e82f0f7d6e040618ce84658b250ceb xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_89391ff92f83c755ffb9c270e0f6f3c9 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_89391ff92f83c755ffb9c270e0f6f3c9 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_40cd1137d297de071f2da2a433be6fd7 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_40cd1137d297de071f2da2a433be6fd7 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_16e82f0f7d6e040618ce84658b250ceb /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_16e82f0f7d6e040618ce84658b250ceb >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_89391ff92f83c755ffb9c270e0f6f3c9/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_16e82f0f7d6e040618ce84658b250ceb/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_40cd1137d297de071f2da2a433be6fd7/brick >Result: >[cmdexec] INFO 2018/06/08 08:35:52 Creating volume 
vol_27b4e68c05a8cded57c197df819b2c12 replica 3 >[negroni] Started GET /queue/a9ac68956dfb4c06cf5ad2dc5bbcaae8 >[negroni] Completed 200 OK in 138.108µs >[kubeexec] DEBUG 2018/06/08 08:35:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_27b4e68c05a8cded57c197df819b2c12 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_40cd1137d297de071f2da2a433be6fd7/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_16e82f0f7d6e040618ce84658b250ceb/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_89391ff92f83c755ffb9c270e0f6f3c9/brick >Result: volume create: vol_27b4e68c05a8cded57c197df819b2c12: success: please start the volume to access data >[negroni] Started GET /queue/a9ac68956dfb4c06cf5ad2dc5bbcaae8 >[negroni] Completed 200 OK in 136.92µs >[negroni] Started GET /queue/a9ac68956dfb4c06cf5ad2dc5bbcaae8 >[negroni] Completed 200 OK in 132.062µs >[negroni] Started GET /queue/a9ac68956dfb4c06cf5ad2dc5bbcaae8 >[negroni] Completed 200 OK in 147.524µs >[negroni] Started GET /queue/a9ac68956dfb4c06cf5ad2dc5bbcaae8 >[negroni] Completed 200 OK in 324.84µs >[kubeexec] DEBUG 2018/06/08 08:35:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_27b4e68c05a8cded57c197df819b2c12 >Result: volume start: vol_27b4e68c05a8cded57c197df819b2c12: success >[heketi] INFO 2018/06/08 08:35:57 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:35:57 asynchttp.go:292: Completed job a9ac68956dfb4c06cf5ad2dc5bbcaae8 in 5.417938255s >[negroni] Started GET /queue/a9ac68956dfb4c06cf5ad2dc5bbcaae8 >[negroni] Completed 303 See Other in 201.089µs >[negroni] Started GET /volumes/27b4e68c05a8cded57c197df819b2c12 >[negroni] Completed 200 OK in 
3.059591ms >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:35:57 Allocating brick set #0 >[negroni] Completed 202 Accepted in 15.237599ms >[asynchttp] INFO 2018/06/08 08:35:57 asynchttp.go:288: Started job 138190be1330787bd37eed0a371f160f >[heketi] INFO 2018/06/08 08:35:57 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:35:57 Creating brick b5d1434de12b984177d8fe3865f88829 >[heketi] INFO 2018/06/08 08:35:57 Creating brick 3f6e5bae8b9cfcddcc7d0f9b8ded99fc >[heketi] INFO 2018/06/08 08:35:57 Creating brick cde06dca54580b41329bf54d8306df4c >[negroni] Started GET /queue/138190be1330787bd37eed0a371f160f >[negroni] Completed 200 OK in 315.206µs >[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c >Result: >[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc >Result: >[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829 >Result: >[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_3a4297677881963e3f80124971d50eea/tp_3f6e5bae8b9cfcddcc7d0f9b8ded99fc --virtualsize 10485760K --name brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc >Result: 
Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc" created. >[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_b5d1434de12b984177d8fe3865f88829 --virtualsize 10485760K --name brick_b5d1434de12b984177d8fe3865f88829 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_b5d1434de12b984177d8fe3865f88829" created. >[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_d389f0278a774bd7443a09af960961d8/tp_cde06dca54580b41329bf54d8306df4c --virtualsize 10485760K --name brick_cde06dca54580b41329bf54d8306df4c >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_cde06dca54580b41329bf54d8306df4c" created. 
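Annotation: once all three bricks are mounted, the create path culminates in a single `gluster --mode=script volume create <name> replica 3 <host:/brick-path> ...` invocation. A sketch that assembles that command line from a brick list (the `build_volume_create` helper name is illustrative; the bricks below are the ones from this log):

```shell
#!/bin/sh
# Build the gluster volume-create command string from host:path brick pairs.
build_volume_create() {
  vol=$1; shift
  cmd="gluster --mode=script volume create $vol replica 3"
  for brick in "$@"; do
    cmd="$cmd $brick"
  done
  echo "$cmd"
}

build_volume_create vol_b1f56e511cf62d1961997fb656989ed4 \
  10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc/brick \
  10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829/brick \
  10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c/brick
```

The brick order in the command fixes the replica placement; `--mode=script` suppresses the interactive confirmation prompt so the command is safe to run non-interactively from the executor pod.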
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc isize=512 agcount=16, agsize=163840 blks
         =             sectsz=512 attr=2, projid32bit=1
         =             crc=1 finobt=0, sparse=0
data     =             bsize=4096 blocks=2621440, imaxpct=25
         =             sunit=64 swidth=64 blks
naming   =version 2    bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=2560, version=2
         =             sectsz=512 sunit=64 blks, lazy-count=1
realtime =none         extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_b5d1434de12b984177d8fe3865f88829
Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_b5d1434de12b984177d8fe3865f88829 isize=512 agcount=16, agsize=163840 blks
         =             sectsz=512 attr=2, projid32bit=1
         =             crc=1 finobt=0, sparse=0
data     =             bsize=4096 blocks=2621440, imaxpct=25
         =             sunit=64 swidth=64 blks
naming   =version 2    bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=2560, version=2
         =             sectsz=512 sunit=64 blks, lazy-count=1
realtime =none         extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_cde06dca54580b41329bf54d8306df4c
Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_cde06dca54580b41329bf54d8306df4c isize=512 agcount=16, agsize=163840 blks
         =             sectsz=512 attr=2, projid32bit=1
         =             crc=1 finobt=0, sparse=0
data     =             bsize=4096 blocks=2621440, imaxpct=25
         =             sunit=64 swidth=64 blks
naming   =version 2    bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=2560, version=2
         =             sectsz=512 sunit=64 blks, lazy-count=1
realtime =none         extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_b5d1434de12b984177d8fe3865f88829 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_cde06dca54580b41329bf54d8306df4c /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
Result:
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_cde06dca54580b41329bf54d8306df4c /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c
Result:
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_b5d1434de12b984177d8fe3865f88829 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829
Result:
[negroni] Started GET /queue/138190be1330787bd37eed0a371f160f
[negroni] Completed 200 OK in 144.072µs
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:35:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829/brick
Result:
[cmdexec] INFO 2018/06/08 08:35:58 Creating volume vol_b1f56e511cf62d1961997fb656989ed4 replica 3
[kubeexec] DEBUG 2018/06/08 08:35:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_b1f56e511cf62d1961997fb656989ed4 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c/brick
Result: volume create: vol_b1f56e511cf62d1961997fb656989ed4: success: please start the volume to access data
[negroni] Started GET /queue/138190be1330787bd37eed0a371f160f
[negroni] Completed 200 OK in 147.726µs
[negroni] Started GET /queue/138190be1330787bd37eed0a371f160f
[negroni] Completed 200 OK in 158.24µs
[negroni] Started GET /queue/138190be1330787bd37eed0a371f160f
[negroni] Completed 200 OK in 133.548µs
[negroni] Started GET /queue/138190be1330787bd37eed0a371f160f
[negroni] Completed 200 OK in 317.474µs
[kubeexec] DEBUG 2018/06/08 08:36:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_b1f56e511cf62d1961997fb656989ed4
Result: volume start: vol_b1f56e511cf62d1961997fb656989ed4: success
[heketi] INFO 2018/06/08 08:36:03 Create Volume succeeded
[asynchttp] INFO 2018/06/08 08:36:03 asynchttp.go:292: Completed job 138190be1330787bd37eed0a371f160f in 5.593359534s
[negroni] Started GET /queue/138190be1330787bd37eed0a371f160f
[negroni] Completed 303 See Other in 170.532µs
[negroni] Started GET /volumes/b1f56e511cf62d1961997fb656989ed4
[negroni] Completed 200 OK in 2.948489ms
[negroni] Started GET /volumes
[negroni] Completed 200 OK in 692.594µs
[negroni] Started GET /volumes/27b4e68c05a8cded57c197df819b2c12
[negroni] Completed 200 OK in 995.066µs
[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
[negroni] Completed 200 OK in 761.914µs
[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
[negroni] Completed 200 OK in 750.739µs
[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
[negroni] Completed 200 OK in 743.664µs
[negroni] Started DELETE /volumes/a7c445e486a69e73d8a54e3278593185
[negroni] Completed 409 Conflict in 539.611µs
[negroni] Started GET /volumes/b1f56e511cf62d1961997fb656989ed4
[negroni] Completed 200 OK in 1.111841ms
[negroni] Started DELETE /volumes/27b4e68c05a8cded57c197df819b2c12
[negroni] Completed 202 Accepted in 8.486765ms
[asynchttp] INFO 2018/06/08 08:36:04 asynchttp.go:288: Started job 569c6cdffc1d7dc3a52c7f47162eb949
[heketi] INFO 2018/06/08 08:36:04 Started async operation: Delete Volume
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 138.961µs
[kubeexec] DEBUG 2018/06/08 08:36:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_27b4e68c05a8cded57c197df819b2c12 --xml
Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapList>
    <count>0</count>
  </snapList>
</cliOutput>
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 201.286µs
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 164.305µs
[kubeexec] DEBUG 2018/06/08 08:36:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume stop vol_27b4e68c05a8cded57c197df819b2c12 force
Result: volume stop: vol_27b4e68c05a8cded57c197df819b2c12: success
[kubeexec] DEBUG 2018/06/08 08:36:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume delete vol_27b4e68c05a8cded57c197df819b2c12
Result: volume delete: vol_27b4e68c05a8cded57c197df819b2c12: success
[heketi] INFO 2018/06/08 08:36:07 Deleting brick 16e82f0f7d6e040618ce84658b250ceb
[heketi] INFO 2018/06/08 08:36:07 Deleting brick 89391ff92f83c755ffb9c270e0f6f3c9
[heketi] INFO 2018/06/08 08:36:07 Deleting brick 40cd1137d297de071f2da2a433be6fd7
[kubeexec] DEBUG 2018/06/08 08:36:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_40cd1137d297de071f2da2a433be6fd7 | cut -d" " -f1
Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_40cd1137d297de071f2da2a433be6fd7
[kubeexec] DEBUG 2018/06/08 08:36:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_16e82f0f7d6e040618ce84658b250ceb | cut -d" " -f1
Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_16e82f0f7d6e040618ce84658b250ceb
[kubeexec] DEBUG 2018/06/08 08:36:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_89391ff92f83c755ffb9c270e0f6f3c9 | cut -d" " -f1
Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_89391ff92f83c755ffb9c270e0f6f3c9
[kubeexec] DEBUG 2018/06/08 08:36:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_40cd1137d297de071f2da2a433be6fd7
Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_40cd1137d297de071f2da2a433be6fd7
[kubeexec] DEBUG 2018/06/08 08:36:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_16e82f0f7d6e040618ce84658b250ceb
Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_16e82f0f7d6e040618ce84658b250ceb
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 132.66µs
[kubeexec] DEBUG 2018/06/08 08:36:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_89391ff92f83c755ffb9c270e0f6f3c9
Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_89391ff92f83c755ffb9c270e0f6f3c9
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 142.032µs
[kubeexec] DEBUG 2018/06/08 08:36:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_40cd1137d297de071f2da2a433be6fd7
Result:
[kubeexec] DEBUG 2018/06/08 08:36:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_16e82f0f7d6e040618ce84658b250ceb
Result:
[kubeexec] DEBUG 2018/06/08 08:36:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_89391ff92f83c755ffb9c270e0f6f3c9
Result:
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 159.218µs
[kubeexec] DEBUG 2018/06/08 08:36:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_40cd1137d297de071f2da2a433be6fd7/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2018/06/08 08:36:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_16e82f0f7d6e040618ce84658b250ceb/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2018/06/08 08:36:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_89391ff92f83c755ffb9c270e0f6f3c9/d" /var/lib/heketi/fstab
Result:
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 197.817µs
[kubeexec] DEBUG 2018/06/08 08:36:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_40cd1137d297de071f2da2a433be6fd7
Result: Logical volume "brick_40cd1137d297de071f2da2a433be6fd7" successfully removed
[kubeexec] DEBUG 2018/06/08 08:36:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_16e82f0f7d6e040618ce84658b250ceb
Result: Logical volume "brick_16e82f0f7d6e040618ce84658b250ceb" successfully removed
[kubeexec] DEBUG 2018/06/08 08:36:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_89391ff92f83c755ffb9c270e0f6f3c9
Result: Logical volume "brick_89391ff92f83c755ffb9c270e0f6f3c9" successfully removed
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 312.448µs
[kubeexec] DEBUG 2018/06/08 08:36:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_40cd1137d297de071f2da2a433be6fd7
Result: 0
[kubeexec] DEBUG 2018/06/08 08:36:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_16e82f0f7d6e040618ce84658b250ceb
Result: 0
[kubeexec] DEBUG 2018/06/08 08:36:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_89391ff92f83c755ffb9c270e0f6f3c9
Result: 0
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 158.543µs
[kubeexec] DEBUG 2018/06/08 08:36:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_40cd1137d297de071f2da2a433be6fd7
Result: Logical volume "tp_40cd1137d297de071f2da2a433be6fd7" successfully removed
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 140.218µs
[kubeexec] DEBUG 2018/06/08 08:36:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_16e82f0f7d6e040618ce84658b250ceb
Result: Logical volume "tp_16e82f0f7d6e040618ce84658b250ceb" successfully removed
[kubeexec] DEBUG 2018/06/08 08:36:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_89391ff92f83c755ffb9c270e0f6f3c9
Result: Logical volume "tp_89391ff92f83c755ffb9c270e0f6f3c9" successfully removed
[kubeexec] DEBUG 2018/06/08 08:36:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_40cd1137d297de071f2da2a433be6fd7
Result:
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 200 OK in 159.082µs
[kubeexec] DEBUG 2018/06/08 08:36:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_16e82f0f7d6e040618ce84658b250ceb
Result:
[kubeexec] DEBUG 2018/06/08 08:36:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_89391ff92f83c755ffb9c270e0f6f3c9
Result:
[heketi] INFO 2018/06/08 08:36:15 Delete Volume succeeded
[asynchttp] INFO 2018/06/08 08:36:15 asynchttp.go:292: Completed job 569c6cdffc1d7dc3a52c7f47162eb949 in 10.434244873s
[negroni] Started GET /queue/569c6cdffc1d7dc3a52c7f47162eb949
[negroni] Completed 204 No Content in 161.738µs
[negroni] Started DELETE /volumes/b1f56e511cf62d1961997fb656989ed4
[negroni] Completed 202 Accepted in 10.799175ms
[asynchttp] INFO 2018/06/08 08:36:15 asynchttp.go:288: Started job a2fe247c6d5feaa99ab8cc70b02d9dbd
[heketi] INFO 2018/06/08 08:36:15 Started async operation: Delete Volume
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 143.258µs
[kubeexec] DEBUG 2018/06/08 08:36:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_b1f56e511cf62d1961997fb656989ed4 --xml
Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapList>
    <count>0</count>
  </snapList>
</cliOutput>
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 241.821µs
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 305.617µs
[kubeexec] DEBUG 2018/06/08 08:36:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_b1f56e511cf62d1961997fb656989ed4 force
Result: volume stop: vol_b1f56e511cf62d1961997fb656989ed4: success
[kubeexec] DEBUG 2018/06/08 08:36:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_b1f56e511cf62d1961997fb656989ed4
Result: volume delete: vol_b1f56e511cf62d1961997fb656989ed4: success
[heketi] INFO 2018/06/08 08:36:18 Deleting brick cde06dca54580b41329bf54d8306df4c
[heketi] INFO 2018/06/08 08:36:18 Deleting brick b5d1434de12b984177d8fe3865f88829
[heketi] INFO 2018/06/08 08:36:18 Deleting brick 3f6e5bae8b9cfcddcc7d0f9b8ded99fc
[kubeexec] DEBUG 2018/06/08 08:36:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc | cut -d" " -f1
Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
[kubeexec] DEBUG 2018/06/08 08:36:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c | cut -d" " -f1
Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_cde06dca54580b41329bf54d8306df4c
[kubeexec] DEBUG 2018/06/08 08:36:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829 | cut -d" " -f1
Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_b5d1434de12b984177d8fe3865f88829
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 168.658µs
[kubeexec] DEBUG 2018/06/08 08:36:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
Result: vg_3a4297677881963e3f80124971d50eea/tp_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
[kubeexec] DEBUG 2018/06/08 08:36:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_cde06dca54580b41329bf54d8306df4c
Result: vg_d389f0278a774bd7443a09af960961d8/tp_cde06dca54580b41329bf54d8306df4c
[kubeexec] DEBUG 2018/06/08 08:36:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_b5d1434de12b984177d8fe3865f88829
Result: vg_9394bc70699b006c5460c9f654cf345f/tp_b5d1434de12b984177d8fe3865f88829
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 148.03µs
[kubeexec] DEBUG 2018/06/08 08:36:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
Result:
[kubeexec] DEBUG 2018/06/08 08:36:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c
Result:
[kubeexec] DEBUG 2018/06/08 08:36:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829
Result:
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 232.815µs
[kubeexec] DEBUG 2018/06/08 08:36:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2018/06/08 08:36:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_cde06dca54580b41329bf54d8306df4c/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2018/06/08 08:36:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_b5d1434de12b984177d8fe3865f88829/d" /var/lib/heketi/fstab
Result:
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 178.399µs
[kubeexec] DEBUG 2018/06/08 08:36:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
Result: Logical volume "brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc" successfully removed
[kubeexec] DEBUG 2018/06/08 08:36:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_cde06dca54580b41329bf54d8306df4c
Result: Logical volume "brick_cde06dca54580b41329bf54d8306df4c" successfully removed
[kubeexec] DEBUG 2018/06/08 08:36:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_b5d1434de12b984177d8fe3865f88829
Result: Logical volume "brick_b5d1434de12b984177d8fe3865f88829" successfully removed
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 163.057µs
[kubeexec] DEBUG 2018/06/08 08:36:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
Result: 0
[kubeexec] DEBUG 2018/06/08 08:36:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_cde06dca54580b41329bf54d8306df4c
Result: 0
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 222.132µs
[kubeexec] DEBUG 2018/06/08 08:36:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_b5d1434de12b984177d8fe3865f88829
Result: 0
[kubeexec] DEBUG 2018/06/08 08:36:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
Result: Logical volume "tp_3f6e5bae8b9cfcddcc7d0f9b8ded99fc" successfully removed
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 151.931µs
[kubeexec] DEBUG 2018/06/08 08:36:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_cde06dca54580b41329bf54d8306df4c
Result: Logical volume "tp_cde06dca54580b41329bf54d8306df4c" successfully removed
[kubeexec] DEBUG 2018/06/08 08:36:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_b5d1434de12b984177d8fe3865f88829
Result: Logical volume "tp_b5d1434de12b984177d8fe3865f88829" successfully removed
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 200 OK in 216.438µs
[kubeexec] DEBUG 2018/06/08 08:36:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_3f6e5bae8b9cfcddcc7d0f9b8ded99fc
Result:
[kubeexec] DEBUG 2018/06/08 08:36:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_cde06dca54580b41329bf54d8306df4c
Result:
[kubeexec] DEBUG 2018/06/08 08:36:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_b5d1434de12b984177d8fe3865f88829
Result:
[heketi] INFO 2018/06/08 08:36:26 Delete Volume succeeded
[asynchttp] INFO 2018/06/08 08:36:26 asynchttp.go:292: Completed job a2fe247c6d5feaa99ab8cc70b02d9dbd in 10.496159659s
[negroni] Started GET /queue/a2fe247c6d5feaa99ab8cc70b02d9dbd
[negroni] Completed 204 No Content in 150.149µs
[negroni] Started GET /clusters
[negroni] Completed 200 OK in 2.202189ms
[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d
[negroni] Completed 200 OK in 472.834µs
[negroni] Started POST /volumes
[heketi] INFO 2018/06/08 08:36:27 Allocating brick set #0
[negroni] Completed 202 Accepted in 12.939698ms
[asynchttp] INFO 2018/06/08 08:36:27 asynchttp.go:288: Started job 10cad4858b4d10d844dc008722f48815
[heketi] INFO 2018/06/08 08:36:27 Started async operation: Create Volume
[negroni] Started GET /queue/10cad4858b4d10d844dc008722f48815
[negroni] Completed 200 OK in 169.307µs
[heketi] INFO 2018/06/08 08:36:27 Creating brick 7895297323e5925bc0656d485ca1e227
[heketi] INFO 2018/06/08 08:36:27 Creating brick 4c805bb73e44b37c40a402ffd4d57d64
[heketi] INFO 2018/06/08 08:36:27 Creating brick 7163d990d4bfe7b8a4e7d72738f24672
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c805bb73e44b37c40a402ffd4d57d64
Result:
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7895297323e5925bc0656d485ca1e227
Result:
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7163d990d4bfe7b8a4e7d72738f24672
Result:
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 524288K --chunksize 256K --size 104857600K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_7895297323e5925bc0656d485ca1e227 --virtualsize 104857600K --name brick_7895297323e5925bc0656d485ca1e227
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_7895297323e5925bc0656d485ca1e227" created.
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 524288K --chunksize 256K --size 104857600K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_7163d990d4bfe7b8a4e7d72738f24672 --virtualsize 104857600K --name brick_7163d990d4bfe7b8a4e7d72738f24672
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_7163d990d4bfe7b8a4e7d72738f24672" created.
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 524288K --chunksize 256K --size 104857600K --thin vg_3a4297677881963e3f80124971d50eea/tp_4c805bb73e44b37c40a402ffd4d57d64 --virtualsize 104857600K --name brick_4c805bb73e44b37c40a402ffd4d57d64
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_4c805bb73e44b37c40a402ffd4d57d64" created.
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7163d990d4bfe7b8a4e7d72738f24672
Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7163d990d4bfe7b8a4e7d72738f24672 isize=512 agcount=16, agsize=1638400 blks
         =             sectsz=512 attr=2, projid32bit=1
         =             crc=1 finobt=0, sparse=0
data     =             bsize=4096 blocks=26214400, imaxpct=25
         =             sunit=64 swidth=64 blks
naming   =version 2    bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=12800, version=2
         =             sectsz=512 sunit=64 blks, lazy-count=1
realtime =none         extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c805bb73e44b37c40a402ffd4d57d64
Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c805bb73e44b37c40a402ffd4d57d64 isize=512 agcount=16, agsize=1638400 blks
         =             sectsz=512 attr=2, projid32bit=1
         =             crc=1 finobt=0, sparse=0
data     =             bsize=4096 blocks=26214400, imaxpct=25
         =             sunit=64 swidth=64 blks
naming   =version 2    bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=12800, version=2
         =             sectsz=512 sunit=64 blks, lazy-count=1
realtime =none         extsz=4096 blocks=0, rtextents=0
[negroni] Started GET /queue/10cad4858b4d10d844dc008722f48815
[negroni] Completed 200 OK in 131.16µs
[kubeexec] DEBUG 2018/06/08 08:36:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7163d990d4bfe7b8a4e7d72738f24672 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7163d990d4bfe7b8a4e7d72738f24672 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c805bb73e44b37c40a402ffd4d57d64 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c805bb73e44b37c40a402ffd4d57d64 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7895297323e5925bc0656d485ca1e227
Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7895297323e5925bc0656d485ca1e227 isize=512 agcount=16, agsize=1638400 blks
         =             sectsz=512 attr=2, projid32bit=1
         =             crc=1 finobt=0, sparse=0
data     =             bsize=4096 blocks=26214400, imaxpct=25
         =             sunit=64 swidth=64 blks
naming   =version 2    bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=12800, version=2
         =             sectsz=512 sunit=64 blks, lazy-count=1
realtime =none         extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7163d990d4bfe7b8a4e7d72738f24672 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7163d990d4bfe7b8a4e7d72738f24672
Result:
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7895297323e5925bc0656d485ca1e227 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7895297323e5925bc0656d485ca1e227 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c805bb73e44b37c40a402ffd4d57d64 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c805bb73e44b37c40a402ffd4d57d64
Result:
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7163d990d4bfe7b8a4e7d72738f24672/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c805bb73e44b37c40a402ffd4d57d64/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7895297323e5925bc0656d485ca1e227 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7895297323e5925bc0656d485ca1e227
Result:
[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7895297323e5925bc0656d485ca1e227/brick >Result: >[cmdexec] INFO 2018/06/08 08:36:29 Creating volume vol_09be59d82458c2dc19aded51d4f93dae replica 3 >[kubeexec] DEBUG 2018/06/08 08:36:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_09be59d82458c2dc19aded51d4f93dae replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7895297323e5925bc0656d485ca1e227/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7163d990d4bfe7b8a4e7d72738f24672/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c805bb73e44b37c40a402ffd4d57d64/brick >Result: volume create: vol_09be59d82458c2dc19aded51d4f93dae: success: please start the volume to access data >[negroni] Started GET /queue/10cad4858b4d10d844dc008722f48815 >[negroni] Completed 200 OK in 135.074µs >[negroni] Started GET /queue/10cad4858b4d10d844dc008722f48815 >[negroni] Completed 200 OK in 141.539µs >[negroni] Started GET /queue/10cad4858b4d10d844dc008722f48815 >[negroni] Completed 200 OK in 142.953µs >[kubeexec] DEBUG 2018/06/08 08:36:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_09be59d82458c2dc19aded51d4f93dae >Result: volume start: vol_09be59d82458c2dc19aded51d4f93dae: success >[heketi] INFO 2018/06/08 08:36:32 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:36:32 asynchttp.go:292: Completed job 10cad4858b4d10d844dc008722f48815 in 4.969079534s >[negroni] Started GET /queue/10cad4858b4d10d844dc008722f48815 >[negroni] Completed 303 See Other in 164.16µs >[negroni] Started GET /volumes/09be59d82458c2dc19aded51d4f93dae >[negroni] Completed 200 OK in 4.244428ms >[negroni] Started POST /volumes 
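Each brick mount above is persisted by appending a fixed-format line to `/var/lib/heketi/fstab` via `awk "BEGIN {print ...}"`. A small sketch of that entry format, mirroring the lines the log shows being appended (`fstab_line` is an illustrative helper, not heketi's own code):

```python
# Build a heketi-style fstab entry for a brick, mirroring the awk commands
# in the log. `fstab_line` is an illustrative helper name, not heketi's API.

def fstab_line(vg: str, brick: str) -> str:
    device = f"/dev/mapper/{vg}-{brick}"
    mountpoint = f"/var/lib/heketi/mounts/{vg}/{brick}"
    # device  mountpoint  fstype  options  dump  fsck-order
    return f"{device} {mountpoint} xfs rw,inode64,noatime,nouuid 1 2"

line = fstab_line("vg_3a4297677881963e3f80124971d50eea",
                  "brick_4c805bb73e44b37c40a402ffd4d57d64")
print(line)
```

The same brick id later keys the cleanup: the delete path runs `sed -i.save "/brick_<id>/d" /var/lib/heketi/fstab` to drop exactly these lines.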
>[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #3 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #4 >[heketi] INFO 2018/06/08 08:36:33 Allocating brick set #5 >[heketi] ERROR 2018/06/08 08:36:33 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Create Volume Build Failed: No space >[negroni] Completed 500 Internal Server Error in 16.456422ms >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 3.139383ms >[negroni] Started POST /devices >[heketi] INFO 2018/06/08 08:36:33 Adding device /dev/sdf to node 278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 202 Accepted in 18.135157ms >[asynchttp] INFO 2018/06/08 08:36:33 asynchttp.go:288: Started job a89144562a22950ecfca1a631b43e0e5 >[negroni] Started GET /queue/a89144562a22950ecfca1a631b43e0e5 >[negroni] Completed 200 OK in 115.114µs >[kubeexec] DEBUG 2018/06/08 08:36:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/sdf' >Result: Physical volume "/dev/sdf" successfully created. 
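The repeated "Allocating brick set #0 … #5" runs ending in "Create Volume Build Failed: No space" show heketi retrying placement with progressively more (hence smaller) bricks before giving up. A rough simulation of that divide-and-retry idea, assuming a simple halving strategy and a 1 GiB brick floor — both assumptions of mine; the real allocator in operations.go is more involved and its retry counts differ from this sketch:

```python
# Illustrative sketch of brick-subdivision retries, NOT actual heketi code:
# when a volume cannot be placed as one brick set, shrink the brick size
# and raise the number of replica-3 brick sets, until the brick size hits
# an assumed floor and the build fails with "No space".

MIN_BRICK_GIB = 1  # assumed floor, for illustration only

def allocation_attempts(volume_gib: int):
    """Yield (brick_sets, brick_gib) for each placement attempt."""
    sets, brick = 1, volume_gib
    while brick >= MIN_BRICK_GIB:
        yield sets, brick
        sets, brick = sets * 2, brick // 2

attempts = list(allocation_attempts(8))
# one set of 8 GiB, then 2 of 4 GiB, then 4 of 2 GiB, then 8 of 1 GiB
```

Note that in this log every attempt fails at build time (before any executor command runs), so the "No space" responses leave no trace on the gluster side — one ingredient of a heketi/gluster volume-count mismatch.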
>[kubeexec] DEBUG 2018/06/08 08:36:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: vgcreate --autobackup=n vg_8d13a1486235f77a6360e21086e3d14b /dev/sdf >Result: Volume group "vg_8d13a1486235f77a6360e21086e3d14b" successfully created >[kubeexec] DEBUG 2018/06/08 08:36:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: vgdisplay -c vg_8d13a1486235f77a6360e21086e3d14b >Result: vg_8d13a1486235f77a6360e21086e3d14b:r/w:772:-1:0:0:0:-1:0:1:1:104722432:4096:25567:0:25567:23PJ5D-S5P9-ZQEJ-KBzX-EQVe-fps2-6GxOCI >[cmdexec] DEBUG 2018/06/08 08:36:33 /src/github.com/heketi/heketi/executors/cmdexec/device.go:147: Size of /dev/sdf in dhcp46-187.lab.eng.blr.redhat.com is 104722432 >[heketi] INFO 2018/06/08 08:36:33 Added device /dev/sdf >[asynchttp] INFO 2018/06/08 08:36:33 asynchttp.go:292: Completed job a89144562a22950ecfca1a631b43e0e5 in 551.429126ms >[negroni] Started GET /queue/a89144562a22950ecfca1a631b43e0e5 >[negroni] Completed 204 No Content in 153.126µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 5.417209ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 816.98µs >[negroni] Started POST /devices >[heketi] INFO 2018/06/08 08:36:34 Adding device /dev/sdf to node 70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 202 Accepted in 18.845189ms >[asynchttp] INFO 2018/06/08 08:36:34 asynchttp.go:288: Started job f0326b30b43032954e772250785ae5c6 >[negroni] Started GET /queue/f0326b30b43032954e772250785ae5c6 >[negroni] Completed 200 OK in 108.713µs >[kubeexec] DEBUG 2018/06/08 08:36:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/sdf' 
>Result: Physical volume "/dev/sdf" successfully created. >[kubeexec] DEBUG 2018/06/08 08:36:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgcreate --autobackup=n vg_e883b6f7d90329384bba2aeb2c9a34b6 /dev/sdf >Result: Volume group "vg_e883b6f7d90329384bba2aeb2c9a34b6" successfully created >[kubeexec] DEBUG 2018/06/08 08:36:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgdisplay -c vg_e883b6f7d90329384bba2aeb2c9a34b6 >Result: vg_e883b6f7d90329384bba2aeb2c9a34b6:r/w:772:-1:0:0:0:-1:0:1:1:104722432:4096:25567:0:25567:szmF91-22BX-RzJM-zAiK-Aqsk-Idy5-Du4dpC >[cmdexec] DEBUG 2018/06/08 08:36:35 /src/github.com/heketi/heketi/executors/cmdexec/device.go:147: Size of /dev/sdf in dhcp46-122.lab.eng.blr.redhat.com is 104722432 >[heketi] INFO 2018/06/08 08:36:35 Added device /dev/sdf >[asynchttp] INFO 2018/06/08 08:36:35 asynchttp.go:292: Completed job f0326b30b43032954e772250785ae5c6 in 462.499304ms >[negroni] Started GET /queue/f0326b30b43032954e772250785ae5c6 >[negroni] Completed 204 No Content in 151.289µs >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 7.455775ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 928.367µs >[negroni] Started POST /devices >[heketi] INFO 2018/06/08 08:36:36 Adding device /dev/sdf to node d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 202 Accepted in 12.903779ms >[asynchttp] INFO 2018/06/08 08:36:36 asynchttp.go:288: Started job e7deadfa5ef6fdfee306d153c63045c6 >[negroni] Started GET /queue/e7deadfa5ef6fdfee306d153c63045c6 >[negroni] Completed 200 OK in 177.652µs >[kubeexec] DEBUG 2018/06/08 08:36:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: pvcreate 
--metadatasize=128M --dataalignment=256K '/dev/sdf' >Result: Physical volume "/dev/sdf" successfully created. >[kubeexec] DEBUG 2018/06/08 08:36:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: vgcreate --autobackup=n vg_abadb4ba1e7781b57e43a25ef34a08cc /dev/sdf >Result: Volume group "vg_abadb4ba1e7781b57e43a25ef34a08cc" successfully created >[kubeexec] DEBUG 2018/06/08 08:36:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: vgdisplay -c vg_abadb4ba1e7781b57e43a25ef34a08cc >Result: vg_abadb4ba1e7781b57e43a25ef34a08cc:r/w:772:-1:0:0:0:-1:0:1:1:104722432:4096:25567:0:25567:MS7mS3-HAHh-ufnf-2cmn-lZAt-TrCi-pHR9W2 >[cmdexec] DEBUG 2018/06/08 08:36:36 /src/github.com/heketi/heketi/executors/cmdexec/device.go:147: Size of /dev/sdf in dhcp47-76.lab.eng.blr.redhat.com is 104722432 >[heketi] INFO 2018/06/08 08:36:36 Added device /dev/sdf >[asynchttp] INFO 2018/06/08 08:36:36 asynchttp.go:292: Completed job e7deadfa5ef6fdfee306d153c63045c6 in 556.314563ms >[negroni] Started GET /queue/e7deadfa5ef6fdfee306d153c63045c6 >[negroni] Completed 204 No Content in 128.048µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 5.778053ms >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #3 
>[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #4 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #5 >[heketi] INFO 2018/06/08 08:36:37 Allocating brick set #6 >[negroni] Completed 500 Internal Server Error in 19.447586ms >[heketi] ERROR 2018/06/08 08:36:37 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Create Volume Build Failed: No space >[negroni] Started POST /devices/8d13a1486235f77a6360e21086e3d14b/state >[negroni] Completed 202 Accepted in 441.559µs >[asynchttp] INFO 2018/06/08 08:36:37 asynchttp.go:288: Started job 27db2f5d4acee89343326162fa652ac3 >[negroni] Started GET /queue/27db2f5d4acee89343326162fa652ac3 >[negroni] Completed 200 OK in 135.744µs >[asynchttp] INFO 2018/06/08 08:36:37 asynchttp.go:292: Completed job 27db2f5d4acee89343326162fa652ac3 in 7.904311ms >[negroni] Started GET /queue/27db2f5d4acee89343326162fa652ac3 >[negroni] Completed 204 No Content in 139.829µs >[negroni] Started POST /devices/8d13a1486235f77a6360e21086e3d14b/state >[negroni] Completed 202 Accepted in 3.726191ms >[asynchttp] INFO 2018/06/08 08:36:38 asynchttp.go:288: Started job 7013c42e401275e0fd4a27333367afe1 >[heketi] INFO 2018/06/08 08:36:38 Running Remove Device >[negroni] Started GET /queue/7013c42e401275e0fd4a27333367afe1 >[negroni] Completed 200 OK in 172.918µs >[asynchttp] INFO 2018/06/08 08:36:38 asynchttp.go:292: Completed job 7013c42e401275e0fd4a27333367afe1 in 13.175734ms >[negroni] Started GET /queue/7013c42e401275e0fd4a27333367afe1 >[negroni] Completed 204 No Content in 219.178µs >[negroni] Started DELETE /devices/8d13a1486235f77a6360e21086e3d14b >[heketi] INFO 2018/06/08 08:36:39 Deleting device 8d13a1486235f77a6360e21086e3d14b on node 278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 202 Accepted in 1.531976ms >[asynchttp] INFO 2018/06/08 08:36:39 asynchttp.go:288: Started job 70b67b8f1fe31c7045ad0415fd4e36e9 >[negroni] Started GET /queue/70b67b8f1fe31c7045ad0415fd4e36e9 >[negroni] Completed 200 OK in 180.419µs >[kubeexec] 
DEBUG 2018/06/08 08:36:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: vgremove vg_8d13a1486235f77a6360e21086e3d14b >Result: Volume group "vg_8d13a1486235f77a6360e21086e3d14b" successfully removed >[kubeexec] DEBUG 2018/06/08 08:36:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: pvremove '/dev/sdf' >Result: Labels on physical volume "/dev/sdf" successfully wiped. >[kubeexec] ERROR 2018/06/08 08:36:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [ls /var/lib/heketi/mounts/vg_8d13a1486235f77a6360e21086e3d14b] on glusterfs-storage-vsh2m: Err[command terminated with exit code 2]: Stdout []: Stderr [ls: cannot access /var/lib/heketi/mounts/vg_8d13a1486235f77a6360e21086e3d14b: No such file or directory >] >[heketi] INFO 2018/06/08 08:36:40 Deleted node [8d13a1486235f77a6360e21086e3d14b] >[asynchttp] INFO 2018/06/08 08:36:40 asynchttp.go:292: Completed job 70b67b8f1fe31c7045ad0415fd4e36e9 in 435.186345ms >[negroni] Started GET /queue/70b67b8f1fe31c7045ad0415fd4e36e9 >[negroni] Completed 204 No Content in 216.916µs >[negroni] Started POST /devices/e883b6f7d90329384bba2aeb2c9a34b6/state >[negroni] Completed 202 Accepted in 4.157877ms >[asynchttp] INFO 2018/06/08 08:36:40 asynchttp.go:288: Started job 3115668335a67e168219486afcf5b70d >[negroni] Started GET /queue/3115668335a67e168219486afcf5b70d >[negroni] Completed 200 OK in 194.522µs >[asynchttp] INFO 2018/06/08 08:36:40 asynchttp.go:292: Completed job 3115668335a67e168219486afcf5b70d in 9.442688ms >[negroni] Started GET /queue/3115668335a67e168219486afcf5b70d >[negroni] Completed 204 No Content in 190.105µs >[negroni] Started POST /devices/e883b6f7d90329384bba2aeb2c9a34b6/state >[negroni] Completed 202 Accepted in 3.353577ms >[asynchttp] INFO 2018/06/08 08:36:41 asynchttp.go:288: 
Started job f23d9e4ef280417862547668aea83868 >[heketi] INFO 2018/06/08 08:36:41 Running Remove Device >[negroni] Started GET /queue/f23d9e4ef280417862547668aea83868 >[negroni] Completed 200 OK in 163.723µs >[asynchttp] INFO 2018/06/08 08:36:41 asynchttp.go:292: Completed job f23d9e4ef280417862547668aea83868 in 22.396784ms >[negroni] Started GET /queue/f23d9e4ef280417862547668aea83868 >[negroni] Completed 204 No Content in 185.333µs >[negroni] Started DELETE /devices/e883b6f7d90329384bba2aeb2c9a34b6 >[heketi] INFO 2018/06/08 08:36:43 Deleting device e883b6f7d90329384bba2aeb2c9a34b6 on node 70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 202 Accepted in 1.496931ms >[asynchttp] INFO 2018/06/08 08:36:43 asynchttp.go:288: Started job d3076912264b8a47eda6607a096b1636 >[negroni] Started GET /queue/d3076912264b8a47eda6607a096b1636 >[negroni] Completed 200 OK in 127.117µs >[kubeexec] DEBUG 2018/06/08 08:36:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgremove vg_e883b6f7d90329384bba2aeb2c9a34b6 >Result: Volume group "vg_e883b6f7d90329384bba2aeb2c9a34b6" successfully removed >[kubeexec] DEBUG 2018/06/08 08:36:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: pvremove '/dev/sdf' >Result: Labels on physical volume "/dev/sdf" successfully wiped. 
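For each added device, heketi derives the size from `vgdisplay -c` (see the "Size of /dev/sdf ... is 104722432" entries above); in the colon-separated record the VG size in KiB appears as the 12th field. A sketch of that parse, run against a record copied verbatim from this log (`vg_size_kib` is an illustrative helper name, not heketi's own function):

```python
# Parse the colon-separated `vgdisplay -c` record to recover the VG size
# in KiB, as heketi's cmdexec/device.go does. The VG size in KiB is the
# 12th colon-separated field (index 11) per vgdisplay(8).
# `vg_size_kib` is an illustrative helper, not heketi's API.

def vg_size_kib(record: str) -> int:
    return int(record.strip().split(":")[11])

record = ("vg_8d13a1486235f77a6360e21086e3d14b:r/w:772:-1:0:0:0:-1:0:1:1:"
          "104722432:4096:25567:0:25567:23PJ5D-S5P9-ZQEJ-KBzX-EQVe-fps2-6GxOCI")
print(vg_size_kib(record))  # 104722432, matching the log entry
```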
>[kubeexec] ERROR 2018/06/08 08:36:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [ls /var/lib/heketi/mounts/vg_e883b6f7d90329384bba2aeb2c9a34b6] on glusterfs-storage-pg4xc: Err[command terminated with exit code 2]: Stdout []: Stderr [ls: cannot access /var/lib/heketi/mounts/vg_e883b6f7d90329384bba2aeb2c9a34b6: No such file or directory >] >[heketi] INFO 2018/06/08 08:36:43 Deleted node [e883b6f7d90329384bba2aeb2c9a34b6] >[asynchttp] INFO 2018/06/08 08:36:43 asynchttp.go:292: Completed job d3076912264b8a47eda6607a096b1636 in 459.923605ms >[negroni] Started GET /queue/d3076912264b8a47eda6607a096b1636 >[negroni] Completed 204 No Content in 138.955µs >[negroni] Started POST /devices/abadb4ba1e7781b57e43a25ef34a08cc/state >[negroni] Completed 202 Accepted in 3.762531ms >[asynchttp] INFO 2018/06/08 08:36:44 asynchttp.go:288: Started job 3ac746dece871332cf300128197e54d6 >[negroni] Started GET /queue/3ac746dece871332cf300128197e54d6 >[negroni] Completed 200 OK in 180.473µs >[asynchttp] INFO 2018/06/08 08:36:44 asynchttp.go:292: Completed job 3ac746dece871332cf300128197e54d6 in 10.8561ms >[negroni] Started GET /queue/3ac746dece871332cf300128197e54d6 >[negroni] Completed 204 No Content in 160.731µs >[negroni] Started POST /devices/abadb4ba1e7781b57e43a25ef34a08cc/state >[negroni] Completed 202 Accepted in 3.89285ms >[asynchttp] INFO 2018/06/08 08:36:45 asynchttp.go:288: Started job c63b6a3cd977592e26f919e8c94751a4 >[heketi] INFO 2018/06/08 08:36:45 Running Remove Device >[negroni] Started GET /queue/c63b6a3cd977592e26f919e8c94751a4 >[negroni] Completed 200 OK in 160.225µs >[asynchttp] INFO 2018/06/08 08:36:45 asynchttp.go:292: Completed job c63b6a3cd977592e26f919e8c94751a4 in 14.303126ms >[negroni] Started GET /queue/c63b6a3cd977592e26f919e8c94751a4 >[negroni] Completed 204 No Content in 182.061µs >[negroni] Started DELETE /devices/abadb4ba1e7781b57e43a25ef34a08cc >[heketi] INFO 2018/06/08 08:36:46 Deleting device 
abadb4ba1e7781b57e43a25ef34a08cc on node d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 202 Accepted in 1.318764ms >[asynchttp] INFO 2018/06/08 08:36:46 asynchttp.go:288: Started job ce6706f0f75d5d4bd8d83414d79d93db >[negroni] Started GET /queue/ce6706f0f75d5d4bd8d83414d79d93db >[negroni] Completed 200 OK in 298.93µs >[kubeexec] DEBUG 2018/06/08 08:36:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: vgremove vg_abadb4ba1e7781b57e43a25ef34a08cc >Result: Volume group "vg_abadb4ba1e7781b57e43a25ef34a08cc" successfully removed >[kubeexec] DEBUG 2018/06/08 08:36:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: pvremove '/dev/sdf' >Result: Labels on physical volume "/dev/sdf" successfully wiped. >[kubeexec] ERROR 2018/06/08 08:36:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [ls /var/lib/heketi/mounts/vg_abadb4ba1e7781b57e43a25ef34a08cc] on glusterfs-storage-gxp7c: Err[command terminated with exit code 2]: Stdout []: Stderr [ls: cannot access /var/lib/heketi/mounts/vg_abadb4ba1e7781b57e43a25ef34a08cc: No such file or directory >] >[heketi] INFO 2018/06/08 08:36:46 Deleted node [abadb4ba1e7781b57e43a25ef34a08cc] >[asynchttp] INFO 2018/06/08 08:36:46 asynchttp.go:292: Completed job ce6706f0f75d5d4bd8d83414d79d93db in 480.222279ms >[negroni] Started GET /queue/ce6706f0f75d5d4bd8d83414d79d93db >[negroni] Completed 204 No Content in 201.273µs >[negroni] Started DELETE /volumes/09be59d82458c2dc19aded51d4f93dae >[negroni] Completed 202 Accepted in 13.760757ms >[asynchttp] INFO 2018/06/08 08:36:47 asynchttp.go:288: Started job 7a23b9eb593cf5579b7ed54e9f496268 >[heketi] INFO 2018/06/08 08:36:47 Started async operation: Delete Volume >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 185.475µs 
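The volume-delete path that follows first runs `gluster --mode=script snapshot list <vol> --xml` and proceeds to `volume stop`/`volume delete` only when the snapshot count is zero. A sketch of extracting that count, with the XML reply copied from the entry just below:

```python
# Extract the snapshot count from `gluster snapshot list --xml` output,
# which heketi checks before deleting a volume. XML copied from this log.
import xml.etree.ElementTree as ET

reply = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapList>
    <count>0</count>
  </snapList>
</cliOutput>"""

count = int(ET.fromstring(reply).findtext("snapList/count"))
print(count)  # 0 -> no snapshots, safe to stop and delete the volume
```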
>[kubeexec] DEBUG 2018/06/08 08:36:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_09be59d82458c2dc19aded51d4f93dae --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 182.002µs >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 181.004µs >[kubeexec] DEBUG 2018/06/08 08:36:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_09be59d82458c2dc19aded51d4f93dae force >Result: volume stop: vol_09be59d82458c2dc19aded51d4f93dae: success >[kubeexec] DEBUG 2018/06/08 08:36:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_09be59d82458c2dc19aded51d4f93dae >Result: volume delete: vol_09be59d82458c2dc19aded51d4f93dae: success >[heketi] INFO 2018/06/08 08:36:50 Deleting brick 4c805bb73e44b37c40a402ffd4d57d64 >[heketi] INFO 2018/06/08 08:36:50 Deleting brick 7163d990d4bfe7b8a4e7d72738f24672 >[heketi] INFO 2018/06/08 08:36:50 Deleting brick 7895297323e5925bc0656d485ca1e227 >[kubeexec] DEBUG 2018/06/08 08:36:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c805bb73e44b37c40a402ffd4d57d64 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c805bb73e44b37c40a402ffd4d57d64 >[kubeexec] DEBUG 
2018/06/08 08:36:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7163d990d4bfe7b8a4e7d72738f24672 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7163d990d4bfe7b8a4e7d72738f24672 >[kubeexec] DEBUG 2018/06/08 08:36:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7895297323e5925bc0656d485ca1e227 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7895297323e5925bc0656d485ca1e227 >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 312.857µs >[kubeexec] DEBUG 2018/06/08 08:36:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7163d990d4bfe7b8a4e7d72738f24672 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_7163d990d4bfe7b8a4e7d72738f24672 >[kubeexec] DEBUG 2018/06/08 08:36:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c805bb73e44b37c40a402ffd4d57d64 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_4c805bb73e44b37c40a402ffd4d57d64 >[kubeexec] DEBUG 2018/06/08 08:36:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv 
/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7895297323e5925bc0656d485ca1e227 > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_7895297323e5925bc0656d485ca1e227 >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 245.272µs >[kubeexec] DEBUG 2018/06/08 08:36:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7163d990d4bfe7b8a4e7d72738f24672 >Result: >[kubeexec] DEBUG 2018/06/08 08:36:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c805bb73e44b37c40a402ffd4d57d64 >Result: >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 128.077µs >[kubeexec] DEBUG 2018/06/08 08:36:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7895297323e5925bc0656d485ca1e227 >Result: >[kubeexec] DEBUG 2018/06/08 08:36:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_7163d990d4bfe7b8a4e7d72738f24672/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:36:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_4c805bb73e44b37c40a402ffd4d57d64/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 196.706µs >[kubeexec] DEBUG 2018/06/08 08:36:53 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_7895297323e5925bc0656d485ca1e227/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:36:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7163d990d4bfe7b8a4e7d72738f24672 > >Result: Logical volume "brick_7163d990d4bfe7b8a4e7d72738f24672" successfully removed >[kubeexec] DEBUG 2018/06/08 08:36:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_4c805bb73e44b37c40a402ffd4d57d64 > >Result: Logical volume "brick_4c805bb73e44b37c40a402ffd4d57d64" successfully removed >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 160.827µs >[kubeexec] DEBUG 2018/06/08 08:36:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7895297323e5925bc0656d485ca1e227 > >Result: Logical volume "brick_7895297323e5925bc0656d485ca1e227" successfully removed >[kubeexec] DEBUG 2018/06/08 08:36:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_7163d990d4bfe7b8a4e7d72738f24672 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:36:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings 
--options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_4c805bb73e44b37c40a402ffd4d57d64 > >Result: 0 >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 197.665µs >[kubeexec] DEBUG 2018/06/08 08:36:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_7895297323e5925bc0656d485ca1e227 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:36:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_7163d990d4bfe7b8a4e7d72738f24672 > >Result: Logical volume "tp_7163d990d4bfe7b8a4e7d72738f24672" successfully removed >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 229.258µs >[kubeexec] DEBUG 2018/06/08 08:36:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_4c805bb73e44b37c40a402ffd4d57d64 > >Result: Logical volume "tp_4c805bb73e44b37c40a402ffd4d57d64" successfully removed >[kubeexec] DEBUG 2018/06/08 08:36:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_7895297323e5925bc0656d485ca1e227 > >Result: Logical volume "tp_7895297323e5925bc0656d485ca1e227" successfully removed >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 200 OK in 215.885µs >[kubeexec] DEBUG 2018/06/08 08:36:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m 
Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7163d990d4bfe7b8a4e7d72738f24672 >Result: >[kubeexec] DEBUG 2018/06/08 08:36:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_4c805bb73e44b37c40a402ffd4d57d64 >Result: >[kubeexec] DEBUG 2018/06/08 08:36:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7895297323e5925bc0656d485ca1e227 >Result: >[heketi] INFO 2018/06/08 08:36:58 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:36:58 asynchttp.go:292: Completed job 7a23b9eb593cf5579b7ed54e9f496268 in 10.667741714s >[negroni] Started GET /queue/7a23b9eb593cf5579b7ed54e9f496268 >[negroni] Completed 204 No Content in 226.127µs >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:36:58 Allocating brick set #0 >[negroni] Completed 202 Accepted in 17.409592ms >[asynchttp] INFO 2018/06/08 08:36:58 asynchttp.go:288: Started job 6cfc77e324d4cbd460708e0cad397c9c >[heketi] INFO 2018/06/08 08:36:58 Started async operation: Create Volume >[negroni] Started GET /queue/6cfc77e324d4cbd460708e0cad397c9c >[negroni] Completed 200 OK in 124.474µs >[heketi] INFO 2018/06/08 08:36:58 Creating brick f526234df2e4ac0fa1c80096c3759590 >[heketi] INFO 2018/06/08 08:36:58 Creating brick 6ba663d12286dbed5c5caa2b3d947f69 >[heketi] INFO 2018/06/08 08:36:58 Creating brick 07da7af9f0e00909886f0c79a0eef485 >[kubeexec] DEBUG 2018/06/08 08:36:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f526234df2e4ac0fa1c80096c3759590 >Result: >[kubeexec] DEBUG 2018/06/08 
08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_07da7af9f0e00909886f0c79a0eef485 >Result: >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_3a4297677881963e3f80124971d50eea/tp_f526234df2e4ac0fa1c80096c3759590 --virtualsize 10485760K --name brick_f526234df2e4ac0fa1c80096c3759590 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_f526234df2e4ac0fa1c80096c3759590" created. >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6ba663d12286dbed5c5caa2b3d947f69 >Result: >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f526234df2e4ac0fa1c80096c3759590 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f526234df2e4ac0fa1c80096c3759590 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:36:59 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_d389f0278a774bd7443a09af960961d8/tp_07da7af9f0e00909886f0c79a0eef485 --virtualsize 10485760K --name brick_07da7af9f0e00909886f0c79a0eef485 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_07da7af9f0e00909886f0c79a0eef485" created. >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f526234df2e4ac0fa1c80096c3759590 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f526234df2e4ac0fa1c80096c3759590 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_07da7af9f0e00909886f0c79a0eef485 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_07da7af9f0e00909886f0c79a0eef485 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f526234df2e4ac0fa1c80096c3759590 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f526234df2e4ac0fa1c80096c3759590 >Result: >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_6ba663d12286dbed5c5caa2b3d947f69 --virtualsize 10485760K --name brick_6ba663d12286dbed5c5caa2b3d947f69 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_6ba663d12286dbed5c5caa2b3d947f69" created. >[negroni] Started GET /queue/6cfc77e324d4cbd460708e0cad397c9c >[negroni] Completed 200 OK in 118.69µs >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_07da7af9f0e00909886f0c79a0eef485 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_07da7af9f0e00909886f0c79a0eef485 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f526234df2e4ac0fa1c80096c3759590/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6ba663d12286dbed5c5caa2b3d947f69 >Result: 
meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6ba663d12286dbed5c5caa2b3d947f69 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:36:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_07da7af9f0e00909886f0c79a0eef485 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_07da7af9f0e00909886f0c79a0eef485 >Result: >[kubeexec] DEBUG 2018/06/08 08:37:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6ba663d12286dbed5c5caa2b3d947f69 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6ba663d12286dbed5c5caa2b3d947f69 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:37:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_07da7af9f0e00909886f0c79a0eef485/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:37:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6ba663d12286dbed5c5caa2b3d947f69 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6ba663d12286dbed5c5caa2b3d947f69 >Result: >[kubeexec] DEBUG 2018/06/08 08:37:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6ba663d12286dbed5c5caa2b3d947f69/brick >Result: >[cmdexec] INFO 2018/06/08 08:37:00 Creating volume vol_a29f96e611e83fe05af97f0b595cb460 replica 3 >[kubeexec] DEBUG 2018/06/08 08:37:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_a29f96e611e83fe05af97f0b595cb460 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f526234df2e4ac0fa1c80096c3759590/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6ba663d12286dbed5c5caa2b3d947f69/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_07da7af9f0e00909886f0c79a0eef485/brick >Result: volume create: vol_a29f96e611e83fe05af97f0b595cb460: success: please start the volume to access data >[negroni] Started GET /queue/6cfc77e324d4cbd460708e0cad397c9c >[negroni] Completed 200 OK in 151.983µs >[negroni] Started GET /queue/6cfc77e324d4cbd460708e0cad397c9c >[negroni] Completed 200 OK in 131.357µs >[kubeexec] DEBUG 2018/06/08 08:37:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_a29f96e611e83fe05af97f0b595cb460 >Result: volume start: vol_a29f96e611e83fe05af97f0b595cb460: success >[heketi] INFO 2018/06/08 08:37:02 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:37:02 asynchttp.go:292: Completed job 6cfc77e324d4cbd460708e0cad397c9c in 3.990543659s >[negroni] Started GET 
/queue/6cfc77e324d4cbd460708e0cad397c9c >[negroni] Completed 303 See Other in 199.362µs >[negroni] Started GET /volumes/a29f96e611e83fe05af97f0b595cb460 >[negroni] Completed 200 OK in 5.749052ms >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 171.625µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 279.865µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 3.200999ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.764225ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 891.442µs >[negroni] Started GET /volumes/a29f96e611e83fe05af97f0b595cb460 >[negroni] Completed 200 OK in 1.235017ms >[negroni] Started POST /volumes/a29f96e611e83fe05af97f0b595cb460/expand >[heketi] INFO 2018/06/08 08:37:04 Allocating brick set #0 >[negroni] Completed 202 Accepted in 9.336857ms >[asynchttp] INFO 2018/06/08 08:37:04 asynchttp.go:288: Started job 5dc1008f9ebb211ba152dd249430d699 >[heketi] INFO 2018/06/08 08:37:04 Started async operation: Expand Volume >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 100.189µs >[heketi] INFO 2018/06/08 08:37:04 Creating brick 46ec6998de7cdc580ac03be455e5a4a6 >[heketi] INFO 2018/06/08 08:37:04 Creating brick 097f338e8c8aee735f656f0b48d684fa >[heketi] INFO 2018/06/08 08:37:04 Creating brick 57fb807839bd9ec9be7d29605127df0f >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_46ec6998de7cdc580ac03be455e5a4a6 >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_097f338e8c8aee735f656f0b48d684fa >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_57fb807839bd9ec9be7d29605127df0f >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 16384K --chunksize 256K --size 3145728K --thin vg_3a4297677881963e3f80124971d50eea/tp_097f338e8c8aee735f656f0b48d684fa --virtualsize 3145728K --name brick_097f338e8c8aee735f656f0b48d684fa >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_097f338e8c8aee735f656f0b48d684fa" created. >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 16384K --chunksize 256K --size 3145728K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_57fb807839bd9ec9be7d29605127df0f --virtualsize 3145728K --name brick_57fb807839bd9ec9be7d29605127df0f >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_57fb807839bd9ec9be7d29605127df0f" created. 
>[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 16384K --chunksize 256K --size 3145728K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_46ec6998de7cdc580ac03be455e5a4a6 --virtualsize 3145728K --name brick_46ec6998de7cdc580ac03be455e5a4a6 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_46ec6998de7cdc580ac03be455e5a4a6" created. >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_57fb807839bd9ec9be7d29605127df0f >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_57fb807839bd9ec9be7d29605127df0f isize=512 agcount=8, agsize=98304 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=786432, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_097f338e8c8aee735f656f0b48d684fa >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_097f338e8c8aee735f656f0b48d684fa isize=512 agcount=8, agsize=98304 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=786432, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log 
=internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_46ec6998de7cdc580ac03be455e5a4a6 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_46ec6998de7cdc580ac03be455e5a4a6 isize=512 agcount=8, agsize=98304 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=786432, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_57fb807839bd9ec9be7d29605127df0f /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_57fb807839bd9ec9be7d29605127df0f xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_097f338e8c8aee735f656f0b48d684fa /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_097f338e8c8aee735f656f0b48d684fa xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_46ec6998de7cdc580ac03be455e5a4a6 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_46ec6998de7cdc580ac03be455e5a4a6 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_57fb807839bd9ec9be7d29605127df0f /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_57fb807839bd9ec9be7d29605127df0f >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_46ec6998de7cdc580ac03be455e5a4a6 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_46ec6998de7cdc580ac03be455e5a4a6 >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_097f338e8c8aee735f656f0b48d684fa /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_097f338e8c8aee735f656f0b48d684fa >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_57fb807839bd9ec9be7d29605127df0f/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_097f338e8c8aee735f656f0b48d684fa/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:37:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_46ec6998de7cdc580ac03be455e5a4a6/brick >Result: >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 155.51µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 228.382µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 133.282µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 167.744µs >[kubeexec] DEBUG 2018/06/08 08:37:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume add-brick vol_a29f96e611e83fe05af97f0b595cb460 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_46ec6998de7cdc580ac03be455e5a4a6/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_57fb807839bd9ec9be7d29605127df0f/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_097f338e8c8aee735f656f0b48d684fa/brick >Result: volume add-brick: success >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 152.303µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 137.811µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 132.114µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 157.453µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] 
Completed 200 OK in 277.868µs >[heketi] INFO 2018/06/08 08:37:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:37:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:37:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 13min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─11007 /usr/sbin/glusterfs -s localhost --volfile-id 
gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:37:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:37:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:37:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 13min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p 
/var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─11243 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
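The periodic health monitor above execs into each gluster pod, runs `systemctl status glusterd`, and then logs `up=true` for the node. A minimal sketch of deciding liveness from that output (the function name and abbreviated sample text are illustrative, not heketi code):

```shell
# Decide glusterd health from `systemctl status glusterd` output,
# as the "Periodic health check status" entries above report it.
glusterd_up() {
    printf '%s' "$1" | grep -q 'Active: active (running)' \
        && echo "up=true" || echo "up=false"
}

# Sample abbreviated from this log:
status_output='glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 13min ago'

glusterd_up "$status_output"   # up=true
```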
>[heketi] INFO 2018/06/08 08:37:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:37:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 234.812µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 225.148µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 162.685µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 279.348µs >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 200 OK in 230.472µs >[kubeexec] DEBUG 2018/06/08 08:37:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume rebalance vol_a29f96e611e83fe05af97f0b595cb460 start >Result: volume rebalance: vol_a29f96e611e83fe05af97f0b595cb460: success: Rebalance on vol_a29f96e611e83fe05af97f0b595cb460 has been started successfully. Use rebalance status command to check status of the rebalance process. 
>ID: a49b6489-335d-4802-b5cb-dd6f098d13e7 > >[heketi] INFO 2018/06/08 08:37:19 Expand Volume succeeded >[asynchttp] INFO 2018/06/08 08:37:19 asynchttp.go:292: Completed job 5dc1008f9ebb211ba152dd249430d699 in 14.526564152s >[kubeexec] DEBUG 2018/06/08 08:37:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 11min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─9232 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:37:19 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:37:19 Cleaned 0 nodes from health cache >[negroni] Started GET /queue/5dc1008f9ebb211ba152dd249430d699 >[negroni] Completed 303 See Other in 188.405µs >[negroni] Started GET /volumes/a29f96e611e83fe05af97f0b595cb460 >[negroni] Completed 200 OK in 6.829644ms >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 235.685µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 356.846µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 3.29431ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 2.58317ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 968.916µs >[negroni] Started GET /volumes/a29f96e611e83fe05af97f0b595cb460 >[negroni] Completed 200 OK in 927.28µs >[negroni] Started POST /volumes/a29f96e611e83fe05af97f0b595cb460/expand >[heketi] INFO 2018/06/08 08:37:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 10.254939ms >[asynchttp] INFO 2018/06/08 08:37:23 asynchttp.go:288: Started job 175d13daae1213f905e671b4e3303d57 >[heketi] INFO 2018/06/08 08:37:23 Started async operation: Expand Volume >[heketi] INFO 2018/06/08 08:37:23 Creating brick 27443c3e5bbd70edaecc2f07a080b837 >[heketi] INFO 2018/06/08 08:37:23 Creating brick fa2fd6ed4afbf5ab55c24a6bec103abb 
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[heketi] INFO 2018/06/08 08:37:23 Creating brick 2a96d70db787cec2e5621e5eb6d0fd04
>[negroni] Completed 200 OK in 108.961µs
>[kubeexec] DEBUG 2018/06/08 08:37:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_27443c3e5bbd70edaecc2f07a080b837
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2a96d70db787cec2e5621e5eb6d0fd04
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_fa2fd6ed4afbf5ab55c24a6bec103abb
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 16384K --chunksize 256K --size 3145728K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2a96d70db787cec2e5621e5eb6d0fd04 --virtualsize 3145728K --name brick_2a96d70db787cec2e5621e5eb6d0fd04
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_2a96d70db787cec2e5621e5eb6d0fd04" created.
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 16384K --chunksize 256K --size 3145728K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_27443c3e5bbd70edaecc2f07a080b837 --virtualsize 3145728K --name brick_27443c3e5bbd70edaecc2f07a080b837
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_27443c3e5bbd70edaecc2f07a080b837" created.
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 16384K --chunksize 256K --size 3145728K --thin vg_d389f0278a774bd7443a09af960961d8/tp_fa2fd6ed4afbf5ab55c24a6bec103abb --virtualsize 3145728K --name brick_fa2fd6ed4afbf5ab55c24a6bec103abb
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_fa2fd6ed4afbf5ab55c24a6bec103abb" created.
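The lvcreate entries above show heketi's per-brick naming scheme: for each brick id it creates a thin pool `tp_<id>` and a thin LV `brick_<id>` inside the node's VG in a single lvcreate call, and the later mkfs.xfs and mount steps address the LV through its device-mapper node `/dev/mapper/<vg>-<lv>`. A minimal sketch of that path derivation; the VG and brick ids below are placeholders, not values from this log:

```shell
# Placeholder ids -- illustrative only, not taken from this cluster.
VG="vg_0123456789abcdef0123456789abcdef"
ID="abcdefabcdefabcdefabcdefabcdefab"

TP="tp_${ID}"      # thin pool, as in: lvcreate ... --thin <vg>/tp_<id>
LV="brick_${ID}"   # thin LV,   as in: lvcreate ... --name brick_<id>

# device-mapper node later used by mkfs.xfs and mount
DEV="/dev/mapper/${VG}-${LV}"
echo "${DEV}"
```

Because each brick gets its own single-LV pool, deleting the brick LV later leaves an empty pool (`thin_count` 0), which the delete flow at the end of this log removes in a separate lvremove.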
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2a96d70db787cec2e5621e5eb6d0fd04
>Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2a96d70db787cec2e5621e5eb6d0fd04 isize=512 agcount=8, agsize=98304 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=786432, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_27443c3e5bbd70edaecc2f07a080b837
>Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_27443c3e5bbd70edaecc2f07a080b837 isize=512 agcount=8, agsize=98304 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=786432, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_fa2fd6ed4afbf5ab55c24a6bec103abb
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_fa2fd6ed4afbf5ab55c24a6bec103abb isize=512 agcount=8, agsize=98304 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=786432, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_27443c3e5bbd70edaecc2f07a080b837 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_27443c3e5bbd70edaecc2f07a080b837 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2a96d70db787cec2e5621e5eb6d0fd04 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2a96d70db787cec2e5621e5eb6d0fd04 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_fa2fd6ed4afbf5ab55c24a6bec103abb /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_fa2fd6ed4afbf5ab55c24a6bec103abb xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2a96d70db787cec2e5621e5eb6d0fd04 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2a96d70db787cec2e5621e5eb6d0fd04
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_27443c3e5bbd70edaecc2f07a080b837 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_27443c3e5bbd70edaecc2f07a080b837
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_fa2fd6ed4afbf5ab55c24a6bec103abb /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_fa2fd6ed4afbf5ab55c24a6bec103abb
>Result:
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 158.726µs
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2a96d70db787cec2e5621e5eb6d0fd04/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_27443c3e5bbd70edaecc2f07a080b837/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_fa2fd6ed4afbf5ab55c24a6bec103abb/brick
>Result:
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 160.98µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 359.064µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 259.501µs
>[kubeexec] DEBUG 2018/06/08 08:37:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume add-brick vol_a29f96e611e83fe05af97f0b595cb460 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_27443c3e5bbd70edaecc2f07a080b837/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_fa2fd6ed4afbf5ab55c24a6bec103abb/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2a96d70db787cec2e5621e5eb6d0fd04/brick
>Result: volume add-brick: success
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 140.95µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 245.622µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 359.921µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 138.929µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 180.991µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 234.938µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 162.57µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 229.742µs
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 187.447µs
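The awk/mount/mkdir sequence above registers each new brick in /var/lib/heketi/fstab before mounting it and handing it to `gluster volume add-brick`. The appended entry always has the same six fstab fields. A runnable sketch that builds the same line into a scratch file (placeholder ids; the scratch file stands in for /var/lib/heketi/fstab):

```shell
# Placeholder ids and a scratch file standing in for /var/lib/heketi/fstab.
VG="vg_0123456789abcdef0123456789abcdef"
BRICK="brick_abcdefabcdefabcdefabcdefabcdefab"
FSTAB="$(mktemp)"

# Same fields as the awk append in the log:
# device, mountpoint, fstype, options, dump, fsck-pass
LINE="/dev/mapper/${VG}-${BRICK} /var/lib/heketi/mounts/${VG}/${BRICK} xfs rw,inode64,noatime,nouuid 1 2"
printf '%s\n' "${LINE}" >> "${FSTAB}"
cat "${FSTAB}"
```

The `nouuid` option matters here because every brick filesystem is created from the same mkfs parameters; it lets XFS mount even if a duplicate UUID ever appears on the node.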
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 200 OK in 274.038µs
>[kubeexec] DEBUG 2018/06/08 08:37:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume rebalance vol_a29f96e611e83fe05af97f0b595cb460 start
>Result: volume rebalance: vol_a29f96e611e83fe05af97f0b595cb460: success: Rebalance on vol_a29f96e611e83fe05af97f0b595cb460 has been started successfully. Use rebalance status command to check status of the rebalance process.
>ID: 46590f33-516f-4a31-984d-e29c7f233067
>
>[heketi] INFO 2018/06/08 08:37:38 Expand Volume succeeded
>[asynchttp] INFO 2018/06/08 08:37:38 asynchttp.go:292: Completed job 175d13daae1213f905e671b4e3303d57 in 14.7851187s
>[negroni] Started GET /queue/175d13daae1213f905e671b4e3303d57
>[negroni] Completed 303 See Other in 257.718µs
>[negroni] Started GET /volumes/a29f96e611e83fe05af97f0b595cb460
>[negroni] Completed 200 OK in 5.356655ms
>[negroni] Started GET /clusters
>[negroni] Completed 200 OK in 271.585µs
>[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d
>[negroni] Completed 200 OK in 289.32µs
>[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1
>[negroni] Completed 200 OK in 4.282973ms
>[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49
>[negroni] Completed 200 OK in 2.440086ms
>[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650
>[negroni] Completed 200 OK in 1.875202ms
>[negroni] Started GET /volumes/a29f96e611e83fe05af97f0b595cb460
>[negroni] Completed 200 OK in 1.571782ms
>[negroni] Started GET /clusters
>[negroni] Completed 200 OK in 159.169µs
>[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d
>[negroni] Completed 200 OK in 311.999µs
>[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1
>[negroni] Completed 200 OK in 1.024889ms
>[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49
>[negroni] Completed 200 OK in 1.674015ms
>[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650
>[negroni] Completed 200 OK in 1.119936ms
>[negroni] Started DELETE /volumes/a29f96e611e83fe05af97f0b595cb460
>[negroni] Completed 202 Accepted in 11.526063ms
>[asynchttp] INFO 2018/06/08 08:37:42 asynchttp.go:288: Started job 598142190d8886b152c94f68df77e0e2
>[heketi] INFO 2018/06/08 08:37:42 Started async operation: Delete Volume
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 153.633µs
>[kubeexec] DEBUG 2018/06/08 08:37:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_a29f96e611e83fe05af97f0b595cb460 --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 120.55µs
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 263.018µs
>[kubeexec] DEBUG 2018/06/08 08:37:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_a29f96e611e83fe05af97f0b595cb460 force
>Result: volume stop: vol_a29f96e611e83fe05af97f0b595cb460: success
>[kubeexec] DEBUG 2018/06/08 08:37:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_a29f96e611e83fe05af97f0b595cb460
>Result: volume delete: vol_a29f96e611e83fe05af97f0b595cb460: success
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick 07da7af9f0e00909886f0c79a0eef485
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick 097f338e8c8aee735f656f0b48d684fa
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick 27443c3e5bbd70edaecc2f07a080b837
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick 2a96d70db787cec2e5621e5eb6d0fd04
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick 46ec6998de7cdc580ac03be455e5a4a6
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick 57fb807839bd9ec9be7d29605127df0f
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick f526234df2e4ac0fa1c80096c3759590
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick 6ba663d12286dbed5c5caa2b3d947f69
>[heketi] INFO 2018/06/08 08:37:45 Deleting brick fa2fd6ed4afbf5ab55c24a6bec103abb
>[kubeexec] DEBUG 2018/06/08 08:37:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_27443c3e5bbd70edaecc2f07a080b837 | cut -d" " -f1
>Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_27443c3e5bbd70edaecc2f07a080b837
>[kubeexec] DEBUG 2018/06/08 08:37:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_097f338e8c8aee735f656f0b48d684fa | cut -d" " -f1
>Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_097f338e8c8aee735f656f0b48d684fa
>[kubeexec] DEBUG 2018/06/08 08:37:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_07da7af9f0e00909886f0c79a0eef485 | cut -d" " -f1
>Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_07da7af9f0e00909886f0c79a0eef485
>[kubeexec] DEBUG 2018/06/08 08:37:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_46ec6998de7cdc580ac03be455e5a4a6 | cut -d" " -f1
>Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_46ec6998de7cdc580ac03be455e5a4a6
>[kubeexec] DEBUG 2018/06/08 08:37:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2a96d70db787cec2e5621e5eb6d0fd04 | cut -d" " -f1
>Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2a96d70db787cec2e5621e5eb6d0fd04
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 152.324µs
>[kubeexec] DEBUG 2018/06/08 08:37:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_57fb807839bd9ec9be7d29605127df0f | cut -d" " -f1
>Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_57fb807839bd9ec9be7d29605127df0f
>[kubeexec] DEBUG 2018/06/08 08:37:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6ba663d12286dbed5c5caa2b3d947f69 | cut -d" " -f1
>Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6ba663d12286dbed5c5caa2b3d947f69
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 136.554µs
>[kubeexec] DEBUG 2018/06/08 08:37:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f526234df2e4ac0fa1c80096c3759590 | cut -d" " -f1
>Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f526234df2e4ac0fa1c80096c3759590
>[kubeexec] DEBUG 2018/06/08 08:37:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_fa2fd6ed4afbf5ab55c24a6bec103abb | cut -d" " -f1
>Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_fa2fd6ed4afbf5ab55c24a6bec103abb
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 140.488µs
>[kubeexec] DEBUG 2018/06/08 08:37:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_27443c3e5bbd70edaecc2f07a080b837
>
>Result: vg_9394bc70699b006c5460c9f654cf345f/tp_27443c3e5bbd70edaecc2f07a080b837
>[kubeexec] DEBUG 2018/06/08 08:37:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_097f338e8c8aee735f656f0b48d684fa
>
>Result: vg_3a4297677881963e3f80124971d50eea/tp_097f338e8c8aee735f656f0b48d684fa
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 239.614µs
>[kubeexec] DEBUG 2018/06/08 08:37:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_07da7af9f0e00909886f0c79a0eef485
>
>Result: vg_d389f0278a774bd7443a09af960961d8/tp_07da7af9f0e00909886f0c79a0eef485
>[kubeexec] DEBUG 2018/06/08 08:37:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_46ec6998de7cdc580ac03be455e5a4a6
>
>Result: vg_9394bc70699b006c5460c9f654cf345f/tp_46ec6998de7cdc580ac03be455e5a4a6
>[kubeexec] DEBUG 2018/06/08 08:37:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2a96d70db787cec2e5621e5eb6d0fd04
>
>Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2a96d70db787cec2e5621e5eb6d0fd04
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 178.101µs
>[kubeexec] DEBUG 2018/06/08 08:37:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_57fb807839bd9ec9be7d29605127df0f
>
>Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_57fb807839bd9ec9be7d29605127df0f
>[kubeexec] DEBUG 2018/06/08 08:37:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6ba663d12286dbed5c5caa2b3d947f69
>
>Result: vg_9394bc70699b006c5460c9f654cf345f/tp_6ba663d12286dbed5c5caa2b3d947f69
>[kubeexec] DEBUG 2018/06/08 08:37:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f526234df2e4ac0fa1c80096c3759590
>
>Result: vg_3a4297677881963e3f80124971d50eea/tp_f526234df2e4ac0fa1c80096c3759590
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 249.281µs
>[kubeexec] DEBUG 2018/06/08 08:37:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_fa2fd6ed4afbf5ab55c24a6bec103abb
>
>Result: vg_d389f0278a774bd7443a09af960961d8/tp_fa2fd6ed4afbf5ab55c24a6bec103abb
>[kubeexec] DEBUG 2018/06/08 08:37:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_27443c3e5bbd70edaecc2f07a080b837
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_097f338e8c8aee735f656f0b48d684fa
>Result:
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 216.19µs
>[kubeexec] DEBUG 2018/06/08 08:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_07da7af9f0e00909886f0c79a0eef485
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_46ec6998de7cdc580ac03be455e5a4a6
>Result:
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 283.338µs
>[kubeexec] DEBUG 2018/06/08 08:37:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2a96d70db787cec2e5621e5eb6d0fd04
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_57fb807839bd9ec9be7d29605127df0f
>Result:
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 160.548µs
>[kubeexec] DEBUG 2018/06/08 08:37:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6ba663d12286dbed5c5caa2b3d947f69
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f526234df2e4ac0fa1c80096c3759590
>Result:
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 172.398µs
>[kubeexec] DEBUG 2018/06/08 08:37:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_fa2fd6ed4afbf5ab55c24a6bec103abb
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_27443c3e5bbd70edaecc2f07a080b837/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_097f338e8c8aee735f656f0b48d684fa/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 233.938µs
>[kubeexec] DEBUG 2018/06/08 08:37:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_07da7af9f0e00909886f0c79a0eef485/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_46ec6998de7cdc580ac03be455e5a4a6/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_2a96d70db787cec2e5621e5eb6d0fd04/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 226.582µs
>[kubeexec] DEBUG 2018/06/08 08:37:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_57fb807839bd9ec9be7d29605127df0f/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_6ba663d12286dbed5c5caa2b3d947f69/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_f526234df2e4ac0fa1c80096c3759590/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 212.892µs
>[kubeexec] DEBUG 2018/06/08 08:37:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_fa2fd6ed4afbf5ab55c24a6bec103abb/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:37:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_27443c3e5bbd70edaecc2f07a080b837
>
>Result: Logical volume "brick_27443c3e5bbd70edaecc2f07a080b837" successfully removed
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 267.708µs
>[kubeexec] DEBUG 2018/06/08 08:37:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_097f338e8c8aee735f656f0b48d684fa
>
>Result: Logical volume "brick_097f338e8c8aee735f656f0b48d684fa" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:37:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_07da7af9f0e00909886f0c79a0eef485
>
>Result: Logical volume "brick_07da7af9f0e00909886f0c79a0eef485" successfully removed
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 192.782µs
>[kubeexec] DEBUG 2018/06/08 08:37:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_46ec6998de7cdc580ac03be455e5a4a6
>
>Result: Logical volume "brick_46ec6998de7cdc580ac03be455e5a4a6" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:37:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2a96d70db787cec2e5621e5eb6d0fd04
>
>Result: Logical volume "brick_2a96d70db787cec2e5621e5eb6d0fd04" successfully removed
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 238.128µs
>[kubeexec] DEBUG 2018/06/08 08:38:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_57fb807839bd9ec9be7d29605127df0f
>
>Result: Logical volume "brick_57fb807839bd9ec9be7d29605127df0f" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:38:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6ba663d12286dbed5c5caa2b3d947f69
>
>Result: Logical volume "brick_6ba663d12286dbed5c5caa2b3d947f69" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:38:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f526234df2e4ac0fa1c80096c3759590
>
>Result: Logical volume "brick_f526234df2e4ac0fa1c80096c3759590" successfully removed
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 152.348µs
>[kubeexec] DEBUG 2018/06/08 08:38:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_fa2fd6ed4afbf5ab55c24a6bec103abb
>
>Result: Logical volume "brick_fa2fd6ed4afbf5ab55c24a6bec103abb" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:38:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_27443c3e5bbd70edaecc2f07a080b837
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:38:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_097f338e8c8aee735f656f0b48d684fa
>
>Result: 0
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 236.582µs
>[kubeexec] DEBUG 2018/06/08 08:38:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_07da7af9f0e00909886f0c79a0eef485
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:38:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_46ec6998de7cdc580ac03be455e5a4a6
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:38:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2a96d70db787cec2e5621e5eb6d0fd04
>
>Result: 0
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 165.081µs
>[kubeexec] DEBUG 2018/06/08 08:38:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_57fb807839bd9ec9be7d29605127df0f
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:38:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_6ba663d12286dbed5c5caa2b3d947f69
>
>Result: 0
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 133.581µs
>[kubeexec] DEBUG 2018/06/08 08:38:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_f526234df2e4ac0fa1c80096c3759590
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:38:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_fa2fd6ed4afbf5ab55c24a6bec103abb
>
>Result: 0
>[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2
>[negroni] Completed 200 OK in 212.466µs
>[kubeexec] DEBUG 2018/06/08 08:38:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_27443c3e5bbd70edaecc2f07a080b837
>
>Result: Logical volume "tp_27443c3e5bbd70edaecc2f07a080b837" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:38:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host:
dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_097f338e8c8aee735f656f0b48d684fa > >Result: Logical volume "tp_097f338e8c8aee735f656f0b48d684fa" successfully removed >[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2 >[negroni] Completed 200 OK in 216.788µs >[kubeexec] DEBUG 2018/06/08 08:38:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_07da7af9f0e00909886f0c79a0eef485 > >Result: Logical volume "tp_07da7af9f0e00909886f0c79a0eef485" successfully removed >[kubeexec] DEBUG 2018/06/08 08:38:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_46ec6998de7cdc580ac03be455e5a4a6 > >Result: Logical volume "tp_46ec6998de7cdc580ac03be455e5a4a6" successfully removed >[kubeexec] DEBUG 2018/06/08 08:38:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2a96d70db787cec2e5621e5eb6d0fd04 > >Result: Logical volume "tp_2a96d70db787cec2e5621e5eb6d0fd04" successfully removed >[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2 >[negroni] Completed 200 OK in 154.068µs >[kubeexec] DEBUG 2018/06/08 08:38:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_57fb807839bd9ec9be7d29605127df0f > >Result: Logical volume "tp_57fb807839bd9ec9be7d29605127df0f" successfully removed >[kubeexec] DEBUG 2018/06/08 08:38:08 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_6ba663d12286dbed5c5caa2b3d947f69 > >Result: Logical volume "tp_6ba663d12286dbed5c5caa2b3d947f69" successfully removed >[kubeexec] DEBUG 2018/06/08 08:38:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_f526234df2e4ac0fa1c80096c3759590 > >Result: Logical volume "tp_f526234df2e4ac0fa1c80096c3759590" successfully removed >[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2 >[negroni] Completed 200 OK in 198.451µs >[kubeexec] DEBUG 2018/06/08 08:38:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_fa2fd6ed4afbf5ab55c24a6bec103abb > >Result: Logical volume "tp_fa2fd6ed4afbf5ab55c24a6bec103abb" successfully removed >[kubeexec] DEBUG 2018/06/08 08:38:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_27443c3e5bbd70edaecc2f07a080b837 >Result: >[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2 >[negroni] Completed 200 OK in 142.655µs >[kubeexec] DEBUG 2018/06/08 08:38:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_097f338e8c8aee735f656f0b48d684fa >Result: >[kubeexec] DEBUG 2018/06/08 08:38:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com 
Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_07da7af9f0e00909886f0c79a0eef485 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_46ec6998de7cdc580ac03be455e5a4a6 >Result: >[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2 >[negroni] Completed 200 OK in 205.889µs >[kubeexec] DEBUG 2018/06/08 08:38:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2a96d70db787cec2e5621e5eb6d0fd04 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_57fb807839bd9ec9be7d29605127df0f >Result: >[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2 >[negroni] Completed 200 OK in 134.529µs >[kubeexec] DEBUG 2018/06/08 08:38:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6ba663d12286dbed5c5caa2b3d947f69 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f526234df2e4ac0fa1c80096c3759590 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_fa2fd6ed4afbf5ab55c24a6bec103abb >Result: >[heketi] INFO 2018/06/08 08:38:12 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:38:12 asynchttp.go:292: Completed job 598142190d8886b152c94f68df77e0e2 in 29.804664552s >[negroni] Started GET /queue/598142190d8886b152c94f68df77e0e2 >[negroni] Completed 204 No Content in 129.935µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 2.472404ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 297.399µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 3.548846ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 2.061511ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.529223ms >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 223.305µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 274.892µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 834.907µs >[negroni] Started POST /devices >[heketi] INFO 2018/06/08 08:38:13 Adding device /dev/sdf to node 278bd6b4e16a8e62ef15aaae22e6abc1 >[asynchttp] INFO 2018/06/08 08:38:13 asynchttp.go:288: Started job 1cea9b6fd5211a402ce922cc12ee59a1 >[negroni] Completed 202 Accepted in 11.105243ms >[negroni] Started GET /queue/1cea9b6fd5211a402ce922cc12ee59a1 >[negroni] Completed 200 OK in 155.914µs >[kubeexec] DEBUG 2018/06/08 08:38:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/sdf' >Result: Physical volume "/dev/sdf" successfully created. 
>[kubeexec] DEBUG 2018/06/08 08:38:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: vgcreate --autobackup=n vg_6ecedef27ecea4a51dc0dbaeb83f2328 /dev/sdf >Result: Volume group "vg_6ecedef27ecea4a51dc0dbaeb83f2328" successfully created >[kubeexec] DEBUG 2018/06/08 08:38:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: vgdisplay -c vg_6ecedef27ecea4a51dc0dbaeb83f2328 >Result: vg_6ecedef27ecea4a51dc0dbaeb83f2328:r/w:772:-1:0:0:0:-1:0:1:1:104722432:4096:25567:0:25567:JbrxAE-Ue84-dyos-Lxwf-CTny-1zfk-WQtP80 >[cmdexec] DEBUG 2018/06/08 08:38:13 /src/github.com/heketi/heketi/executors/cmdexec/device.go:147: Size of /dev/sdf in dhcp46-187.lab.eng.blr.redhat.com is 104722432 >[heketi] INFO 2018/06/08 08:38:13 Added device /dev/sdf >[asynchttp] INFO 2018/06/08 08:38:13 asynchttp.go:292: Completed job 1cea9b6fd5211a402ce922cc12ee59a1 in 519.180802ms >[negroni] Started GET /queue/1cea9b6fd5211a402ce922cc12ee59a1 >[negroni] Completed 204 No Content in 256.471µs >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 6.897718ms >[negroni] Started POST /devices >[heketi] INFO 2018/06/08 08:38:14 Adding device /dev/sdf to node 70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 202 Accepted in 10.483709ms >[asynchttp] INFO 2018/06/08 08:38:14 asynchttp.go:288: Started job 609c4a963033c593d3ce23492977a6ab >[negroni] Started GET /queue/609c4a963033c593d3ce23492977a6ab >[negroni] Completed 200 OK in 126.494µs >[kubeexec] DEBUG 2018/06/08 08:38:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/sdf' >Result: Physical volume "/dev/sdf" successfully created. 
>[kubeexec] DEBUG 2018/06/08 08:38:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgcreate --autobackup=n vg_c7076a74d8fd3bf1c2dcfc77b8d01ea1 /dev/sdf >Result: Volume group "vg_c7076a74d8fd3bf1c2dcfc77b8d01ea1" successfully created >[kubeexec] DEBUG 2018/06/08 08:38:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgdisplay -c vg_c7076a74d8fd3bf1c2dcfc77b8d01ea1 >Result: vg_c7076a74d8fd3bf1c2dcfc77b8d01ea1:r/w:772:-1:0:0:0:-1:0:1:1:104722432:4096:25567:0:25567:bMTslf-fczU-xI8F-esVf-351y-FZEC-thqWUx >[cmdexec] DEBUG 2018/06/08 08:38:15 /src/github.com/heketi/heketi/executors/cmdexec/device.go:147: Size of /dev/sdf in dhcp46-122.lab.eng.blr.redhat.com is 104722432 >[heketi] INFO 2018/06/08 08:38:15 Added device /dev/sdf >[asynchttp] INFO 2018/06/08 08:38:15 asynchttp.go:292: Completed job 609c4a963033c593d3ce23492977a6ab in 592.10989ms >[negroni] Started GET /queue/609c4a963033c593d3ce23492977a6ab >[negroni] Completed 204 No Content in 189.55µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 4.938804ms >[negroni] Started POST /devices >[heketi] INFO 2018/06/08 08:38:15 Adding device /dev/sdf to node d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 202 Accepted in 8.246058ms >[asynchttp] INFO 2018/06/08 08:38:15 asynchttp.go:288: Started job 68a9e009b9ea3887ddbadce2e33271a4 >[negroni] Started GET /queue/68a9e009b9ea3887ddbadce2e33271a4 >[negroni] Completed 200 OK in 128.082µs >[kubeexec] DEBUG 2018/06/08 08:38:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/sdf' >Result: Physical volume "/dev/sdf" successfully created. 
>[kubeexec] DEBUG 2018/06/08 08:38:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: vgcreate --autobackup=n vg_624b4458a2f2db51030873f216ff446a /dev/sdf >Result: Volume group "vg_624b4458a2f2db51030873f216ff446a" successfully created >[kubeexec] DEBUG 2018/06/08 08:38:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: vgdisplay -c vg_624b4458a2f2db51030873f216ff446a >Result: vg_624b4458a2f2db51030873f216ff446a:r/w:772:-1:0:0:0:-1:0:1:1:104722432:4096:25567:0:25567:Ybuqv7-pPcl-5ROM-QSLa-vC1Q-zAQR-Tk5C43 >[cmdexec] DEBUG 2018/06/08 08:38:16 /src/github.com/heketi/heketi/executors/cmdexec/device.go:147: Size of /dev/sdf in dhcp47-76.lab.eng.blr.redhat.com is 104722432 >[heketi] INFO 2018/06/08 08:38:16 Added device /dev/sdf >[asynchttp] INFO 2018/06/08 08:38:16 asynchttp.go:292: Completed job 68a9e009b9ea3887ddbadce2e33271a4 in 523.793649ms >[negroni] Started GET /queue/68a9e009b9ea3887ddbadce2e33271a4 >[negroni] Completed 204 No Content in 133.618µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 4.578965ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 2.100891ms >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 906.679µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.443692ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.480624ms >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.163396ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 939.89µs >[negroni] Started POST /devices/624b4458a2f2db51030873f216ff446a/state >[negroni] Completed 202 Accepted in 
408.501µs >[asynchttp] INFO 2018/06/08 08:38:17 asynchttp.go:288: Started job 1a86a861f6d7007cdd7d7a31bc160dcf >[negroni] Started GET /queue/1a86a861f6d7007cdd7d7a31bc160dcf >[negroni] Completed 200 OK in 111.302µs >[asynchttp] INFO 2018/06/08 08:38:17 asynchttp.go:292: Completed job 1a86a861f6d7007cdd7d7a31bc160dcf in 8.583594ms >[negroni] Started GET /queue/1a86a861f6d7007cdd7d7a31bc160dcf >[negroni] Completed 204 No Content in 228.808µs >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 7.588813ms >[negroni] Started POST /devices/c7076a74d8fd3bf1c2dcfc77b8d01ea1/state >[negroni] Completed 202 Accepted in 607.292µs >[asynchttp] INFO 2018/06/08 08:38:18 asynchttp.go:288: Started job 05c736ee53189ef6a0e5643a96855daa >[negroni] Started GET /queue/05c736ee53189ef6a0e5643a96855daa >[negroni] Completed 200 OK in 89.161µs >[asynchttp] INFO 2018/06/08 08:38:18 asynchttp.go:292: Completed job 05c736ee53189ef6a0e5643a96855daa in 9.936291ms >[negroni] Started GET /queue/05c736ee53189ef6a0e5643a96855daa >[negroni] Completed 204 No Content in 169.757µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 5.178057ms >[negroni] Started POST /devices/6ecedef27ecea4a51dc0dbaeb83f2328/state >[negroni] Completed 202 Accepted in 423.681µs >[asynchttp] INFO 2018/06/08 08:38:20 asynchttp.go:288: Started job 3d537edd99cd8b0677af33799374ed9f >[negroni] Started GET /queue/3d537edd99cd8b0677af33799374ed9f >[negroni] Completed 200 OK in 121.402µs >[asynchttp] INFO 2018/06/08 08:38:20 asynchttp.go:292: Completed job 3d537edd99cd8b0677af33799374ed9f in 7.126426ms >[negroni] Started GET /queue/3d537edd99cd8b0677af33799374ed9f >[negroni] Completed 204 No Content in 150.491µs >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #1 >[heketi] INFO 2018/06/08 
08:38:21 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #3 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #3 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #4 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #5 >[heketi] INFO 2018/06/08 08:38:21 Allocating brick set #6 >[heketi] ERROR 2018/06/08 08:38:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Create Volume Build Failed: No space >[negroni] Completed 500 Internal Server Error in 23.548071ms >[negroni] Started POST /devices/624b4458a2f2db51030873f216ff446a/state >[negroni] Completed 202 Accepted in 634.885µs >[asynchttp] INFO 2018/06/08 08:38:21 asynchttp.go:288: Started job b426ac62da12f030d840bac94e0689a1 >[asynchttp] INFO 2018/06/08 08:38:21 asynchttp.go:292: Completed job b426ac62da12f030d840bac94e0689a1 in 1.523µs >[negroni] Started GET /queue/b426ac62da12f030d840bac94e0689a1 >[negroni] Completed 204 No Content in 96.763µs >[negroni] Started POST /devices/624b4458a2f2db51030873f216ff446a/state >[negroni] Completed 202 Accepted in 592.459µs >[asynchttp] INFO 2018/06/08 08:38:21 asynchttp.go:288: Started job 92b3085a0d6c33e8d55baad007a33eea >[heketi] INFO 2018/06/08 08:38:21 Running Remove Device >[negroni] Started GET /queue/92b3085a0d6c33e8d55baad007a33eea >[negroni] Completed 200 OK in 156.031µs >[asynchttp] INFO 2018/06/08 08:38:21 asynchttp.go:292: Completed job 92b3085a0d6c33e8d55baad007a33eea in 13.197973ms >[negroni] Started GET /queue/92b3085a0d6c33e8d55baad007a33eea >[negroni] Completed 204 No Content in 141.439µs >[negroni] Started DELETE /devices/624b4458a2f2db51030873f216ff446a >[heketi] INFO 2018/06/08 08:38:22 Deleting device 
624b4458a2f2db51030873f216ff446a on node d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 202 Accepted in 1.197344ms >[asynchttp] INFO 2018/06/08 08:38:22 asynchttp.go:288: Started job 6b4b160437a9518ab20c9d2c78893580 >[negroni] Started GET /queue/6b4b160437a9518ab20c9d2c78893580 >[negroni] Completed 200 OK in 89.561µs >[kubeexec] DEBUG 2018/06/08 08:38:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: vgremove vg_624b4458a2f2db51030873f216ff446a >Result: Volume group "vg_624b4458a2f2db51030873f216ff446a" successfully removed >[kubeexec] DEBUG 2018/06/08 08:38:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: pvremove '/dev/sdf' >Result: Labels on physical volume "/dev/sdf" successfully wiped. >[kubeexec] ERROR 2018/06/08 08:38:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [ls /var/lib/heketi/mounts/vg_624b4458a2f2db51030873f216ff446a] on glusterfs-storage-gxp7c: Err[command terminated with exit code 2]: Stdout []: Stderr [ls: cannot access /var/lib/heketi/mounts/vg_624b4458a2f2db51030873f216ff446a: No such file or directory >] >[heketi] INFO 2018/06/08 08:38:23 Deleted node [624b4458a2f2db51030873f216ff446a] >[asynchttp] INFO 2018/06/08 08:38:23 asynchttp.go:292: Completed job 6b4b160437a9518ab20c9d2c78893580 in 481.428176ms >[negroni] Started GET /queue/6b4b160437a9518ab20c9d2c78893580 >[negroni] Completed 204 No Content in 138.896µs >[negroni] Started POST /devices/c7076a74d8fd3bf1c2dcfc77b8d01ea1/state >[negroni] Completed 202 Accepted in 4.146682ms >[asynchttp] INFO 2018/06/08 08:38:23 asynchttp.go:288: Started job d6875f6fe15b0c1446f8debfa6be6bdf >[asynchttp] INFO 2018/06/08 08:38:23 asynchttp.go:292: Completed job d6875f6fe15b0c1446f8debfa6be6bdf in 1.353µs >[negroni] Started GET /queue/d6875f6fe15b0c1446f8debfa6be6bdf 
>[negroni] Completed 204 No Content in 189.901µs >[negroni] Started POST /devices/c7076a74d8fd3bf1c2dcfc77b8d01ea1/state >[negroni] Completed 202 Accepted in 814.794µs >[asynchttp] INFO 2018/06/08 08:38:23 asynchttp.go:288: Started job 6aeeac5db08de10d8a39ffe690f1922e >[heketi] INFO 2018/06/08 08:38:23 Running Remove Device >[negroni] Started GET /queue/6aeeac5db08de10d8a39ffe690f1922e >[negroni] Completed 200 OK in 234.338µs >[asynchttp] INFO 2018/06/08 08:38:23 asynchttp.go:292: Completed job 6aeeac5db08de10d8a39ffe690f1922e in 13.033451ms >[negroni] Started GET /queue/6aeeac5db08de10d8a39ffe690f1922e >[negroni] Completed 204 No Content in 182.614µs >[negroni] Started DELETE /devices/c7076a74d8fd3bf1c2dcfc77b8d01ea1 >[heketi] INFO 2018/06/08 08:38:24 Deleting device c7076a74d8fd3bf1c2dcfc77b8d01ea1 on node 70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 202 Accepted in 1.586374ms >[asynchttp] INFO 2018/06/08 08:38:24 asynchttp.go:288: Started job a4d1b58b4a5fd2efe6dc96b8e1fd6fb7 >[negroni] Started GET /queue/a4d1b58b4a5fd2efe6dc96b8e1fd6fb7 >[negroni] Completed 200 OK in 181.922µs >[kubeexec] DEBUG 2018/06/08 08:38:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgremove vg_c7076a74d8fd3bf1c2dcfc77b8d01ea1 >Result: Volume group "vg_c7076a74d8fd3bf1c2dcfc77b8d01ea1" successfully removed >[kubeexec] DEBUG 2018/06/08 08:38:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: pvremove '/dev/sdf' >Result: Labels on physical volume "/dev/sdf" successfully wiped. 
>[kubeexec] ERROR 2018/06/08 08:38:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [ls /var/lib/heketi/mounts/vg_c7076a74d8fd3bf1c2dcfc77b8d01ea1] on glusterfs-storage-pg4xc: Err[command terminated with exit code 2]: Stdout []: Stderr [ls: cannot access /var/lib/heketi/mounts/vg_c7076a74d8fd3bf1c2dcfc77b8d01ea1: No such file or directory >] >[heketi] INFO 2018/06/08 08:38:25 Deleted node [c7076a74d8fd3bf1c2dcfc77b8d01ea1] >[asynchttp] INFO 2018/06/08 08:38:25 asynchttp.go:292: Completed job a4d1b58b4a5fd2efe6dc96b8e1fd6fb7 in 479.657523ms >[negroni] Started GET /queue/a4d1b58b4a5fd2efe6dc96b8e1fd6fb7 >[negroni] Completed 204 No Content in 130.335µs >[negroni] Started POST /devices/6ecedef27ecea4a51dc0dbaeb83f2328/state >[negroni] Completed 202 Accepted in 3.167729ms >[asynchttp] INFO 2018/06/08 08:38:26 asynchttp.go:288: Started job 563dd17fee8b606b6686492f966eed4f >[asynchttp] INFO 2018/06/08 08:38:26 asynchttp.go:292: Completed job 563dd17fee8b606b6686492f966eed4f in 1.659µs >[negroni] Started GET /queue/563dd17fee8b606b6686492f966eed4f >[negroni] Completed 204 No Content in 128.807µs >[negroni] Started POST /devices/6ecedef27ecea4a51dc0dbaeb83f2328/state >[negroni] Completed 202 Accepted in 668.542µs >[asynchttp] INFO 2018/06/08 08:38:26 asynchttp.go:288: Started job c982581f2804fbd634da05c14e3b47e8 >[heketi] INFO 2018/06/08 08:38:26 Running Remove Device >[negroni] Started GET /queue/c982581f2804fbd634da05c14e3b47e8 >[negroni] Completed 200 OK in 147.166µs >[asynchttp] INFO 2018/06/08 08:38:26 asynchttp.go:292: Completed job c982581f2804fbd634da05c14e3b47e8 in 13.567465ms >[negroni] Started GET /queue/c982581f2804fbd634da05c14e3b47e8 >[negroni] Completed 204 No Content in 232.019µs >[negroni] Started DELETE /devices/6ecedef27ecea4a51dc0dbaeb83f2328 >[heketi] INFO 2018/06/08 08:38:27 Deleting device 6ecedef27ecea4a51dc0dbaeb83f2328 on node 278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 202 Accepted in 1.63327ms 
>[asynchttp] INFO 2018/06/08 08:38:27 asynchttp.go:288: Started job 71a435fc72b367a50501bf2ebd9af6cf >[negroni] Started GET /queue/71a435fc72b367a50501bf2ebd9af6cf >[negroni] Completed 200 OK in 252.281µs >[kubeexec] DEBUG 2018/06/08 08:38:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: vgremove vg_6ecedef27ecea4a51dc0dbaeb83f2328 >Result: Volume group "vg_6ecedef27ecea4a51dc0dbaeb83f2328" successfully removed >[kubeexec] DEBUG 2018/06/08 08:38:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: pvremove '/dev/sdf' >Result: Labels on physical volume "/dev/sdf" successfully wiped. >[kubeexec] ERROR 2018/06/08 08:38:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [ls /var/lib/heketi/mounts/vg_6ecedef27ecea4a51dc0dbaeb83f2328] on glusterfs-storage-vsh2m: Err[command terminated with exit code 2]: Stdout []: Stderr [ls: cannot access /var/lib/heketi/mounts/vg_6ecedef27ecea4a51dc0dbaeb83f2328: No such file or directory >] >[heketi] INFO 2018/06/08 08:38:27 Deleted node [6ecedef27ecea4a51dc0dbaeb83f2328] >[asynchttp] INFO 2018/06/08 08:38:27 asynchttp.go:292: Completed job 71a435fc72b367a50501bf2ebd9af6cf in 525.934022ms >[negroni] Started GET /queue/71a435fc72b367a50501bf2ebd9af6cf >[negroni] Completed 204 No Content in 176.575µs >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:38:28 Allocating brick set #0 >[negroni] Completed 202 Accepted in 19.088005ms >[asynchttp] INFO 2018/06/08 08:38:28 asynchttp.go:288: Started job 065ad33878d141c01bb83ce000d7984d >[heketi] INFO 2018/06/08 08:38:28 Started async operation: Create Volume >[negroni] Started GET /queue/065ad33878d141c01bb83ce000d7984d >[negroni] Completed 200 OK in 116.427µs >[heketi] INFO 2018/06/08 08:38:28 Creating brick 88c50c0f97424a3116d7ecf9870b0f3e >[heketi] INFO 
2018/06/08 08:38:28 Creating brick e106612b2d23de31c6b364829b33a4a9 >[heketi] INFO 2018/06/08 08:38:28 Creating brick de28e8f1db02df454626ab7304cd17b6 >[kubeexec] DEBUG 2018/06/08 08:38:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_de28e8f1db02df454626ab7304cd17b6 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_88c50c0f97424a3116d7ecf9870b0f3e >Result: >[kubeexec] DEBUG 2018/06/08 08:38:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e106612b2d23de31c6b364829b33a4a9 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_d389f0278a774bd7443a09af960961d8/tp_88c50c0f97424a3116d7ecf9870b0f3e --virtualsize 10485760K --name brick_88c50c0f97424a3116d7ecf9870b0f3e >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_88c50c0f97424a3116d7ecf9870b0f3e" created. 
>[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_de28e8f1db02df454626ab7304cd17b6 --virtualsize 10485760K --name brick_de28e8f1db02df454626ab7304cd17b6 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_de28e8f1db02df454626ab7304cd17b6" created. >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 53248K --chunksize 256K --size 10485760K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_e106612b2d23de31c6b364829b33a4a9 --virtualsize 10485760K --name brick_e106612b2d23de31c6b364829b33a4a9 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e106612b2d23de31c6b364829b33a4a9" created. 
>[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e106612b2d23de31c6b364829b33a4a9 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e106612b2d23de31c6b364829b33a4a9 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_de28e8f1db02df454626ab7304cd17b6 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_de28e8f1db02df454626ab7304cd17b6 isize=512 agcount=16, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_88c50c0f97424a3116d7ecf9870b0f3e >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_88c50c0f97424a3116d7ecf9870b0f3e isize=512 agcount=16, 
agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=2621440, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e106612b2d23de31c6b364829b33a4a9 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e106612b2d23de31c6b364829b33a4a9 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_de28e8f1db02df454626ab7304cd17b6 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_de28e8f1db02df454626ab7304cd17b6 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_88c50c0f97424a3116d7ecf9870b0f3e /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_88c50c0f97424a3116d7ecf9870b0f3e xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e106612b2d23de31c6b364829b33a4a9 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e106612b2d23de31c6b364829b33a4a9 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_88c50c0f97424a3116d7ecf9870b0f3e /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_88c50c0f97424a3116d7ecf9870b0f3e >Result: >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_de28e8f1db02df454626ab7304cd17b6 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_de28e8f1db02df454626ab7304cd17b6 >Result: >[negroni] Started GET /queue/065ad33878d141c01bb83ce000d7984d >[negroni] Completed 200 OK in 127.976µs >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e106612b2d23de31c6b364829b33a4a9/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_de28e8f1db02df454626ab7304cd17b6/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_88c50c0f97424a3116d7ecf9870b0f3e/brick >Result: >[cmdexec] INFO 2018/06/08 08:38:29 Creating volume vol_4c71beb63ff6abb18f5ad259cbb0a53a replica 3 >[kubeexec] DEBUG 2018/06/08 08:38:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_4c71beb63ff6abb18f5ad259cbb0a53a replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_88c50c0f97424a3116d7ecf9870b0f3e/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_de28e8f1db02df454626ab7304cd17b6/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e106612b2d23de31c6b364829b33a4a9/brick >Result: volume create: vol_4c71beb63ff6abb18f5ad259cbb0a53a: success: please start the volume to access data >[negroni] Started GET /queue/065ad33878d141c01bb83ce000d7984d >[negroni] Completed 200 OK in 134.025µs >[negroni] Started GET /queue/065ad33878d141c01bb83ce000d7984d >[negroni] Completed 200 OK in 134.287µs >[negroni] Started GET /queue/065ad33878d141c01bb83ce000d7984d >[negroni] Completed 200 OK in 133.463µs >[negroni] Started GET /queue/065ad33878d141c01bb83ce000d7984d >[negroni] Completed 200 OK in 133.276µs >[kubeexec] DEBUG 2018/06/08 08:38:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_4c71beb63ff6abb18f5ad259cbb0a53a >Result: volume start: vol_4c71beb63ff6abb18f5ad259cbb0a53a: success >[heketi] INFO 2018/06/08 08:38:34 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:38:34 asynchttp.go:292: Completed job 065ad33878d141c01bb83ce000d7984d in 5.623403765s >[negroni] Started GET /queue/065ad33878d141c01bb83ce000d7984d >[negroni] Completed 303 See Other in 217.954µs >[negroni] Started GET 
/volumes/4c71beb63ff6abb18f5ad259cbb0a53a >[negroni] Completed 200 OK in 5.362082ms >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 185.627µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 335.325µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 2.714256ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.592671ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 894.631µs >[negroni] Started GET /volumes/4c71beb63ff6abb18f5ad259cbb0a53a >[negroni] Completed 200 OK in 733.605µs >[negroni] Started POST /volumes/4c71beb63ff6abb18f5ad259cbb0a53a/expand >[heketi] INFO 2018/06/08 08:38:36 Allocating brick set #0 >[negroni] Completed 202 Accepted in 12.804927ms >[asynchttp] INFO 2018/06/08 08:38:36 asynchttp.go:288: Started job 680c1f99b6d8dd2df4c9a85721e53eb2 >[heketi] INFO 2018/06/08 08:38:36 Started async operation: Expand Volume >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 105.034µs >[heketi] INFO 2018/06/08 08:38:36 Creating brick f68fd071255bef02a0f31732a1eec3a3 >[heketi] INFO 2018/06/08 08:38:36 Creating brick 2ffaceaa46888863bcb20dfba9c6c722 >[heketi] INFO 2018/06/08 08:38:36 Creating brick 53721c3d4de0bf661138394e5374172c >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_2ffaceaa46888863bcb20dfba9c6c722 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_53721c3d4de0bf661138394e5374172c >Result: 
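The `mkdir -p` targets above follow heketi's fixed mount layout: each brick is mounted at `/var/lib/heketi/mounts/<vg>/<brick>`, and the directory actually handed to `gluster volume create`/`add-brick` is one level below, at `.../brick`. A small illustrative helper capturing that layout (function names are made up for the sketch):

```shell
# Path layout used by the mkdir/mount/volume commands in this log.
heketi_mount_point() {  # heketi_mount_point VG BRICK -> brick mount point
    printf '/var/lib/heketi/mounts/%s/%s\n' "$1" "$2"
}
heketi_brick_dir() {    # the directory passed to gluster volume create/add-brick
    printf '%s/brick\n' "$(heketi_mount_point "$1" "$2")"
}

# Example with placeholder names:
heketi_brick_dir vg_example brick_example
```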
>[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_f68fd071255bef02a0f31732a1eec3a3 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 28672K --chunksize 256K --size 5242880K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_2ffaceaa46888863bcb20dfba9c6c722 --virtualsize 5242880K --name brick_2ffaceaa46888863bcb20dfba9c6c722 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_2ffaceaa46888863bcb20dfba9c6c722" created. >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 28672K --chunksize 256K --size 5242880K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_f68fd071255bef02a0f31732a1eec3a3 --virtualsize 5242880K --name brick_f68fd071255bef02a0f31732a1eec3a3 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_f68fd071255bef02a0f31732a1eec3a3" created. >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 28672K --chunksize 256K --size 5242880K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_53721c3d4de0bf661138394e5374172c --virtualsize 5242880K --name brick_53721c3d4de0bf661138394e5374172c >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. 
> Logical volume "brick_53721c3d4de0bf661138394e5374172c" created. >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_2ffaceaa46888863bcb20dfba9c6c722 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_2ffaceaa46888863bcb20dfba9c6c722 isize=512 agcount=8, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=1310720, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_53721c3d4de0bf661138394e5374172c >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_53721c3d4de0bf661138394e5374172c isize=512 agcount=8, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=1310720, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_f68fd071255bef02a0f31732a1eec3a3 >Result: 
meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_f68fd071255bef02a0f31732a1eec3a3 isize=512 agcount=8, agsize=163840 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=1310720, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_2ffaceaa46888863bcb20dfba9c6c722 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_2ffaceaa46888863bcb20dfba9c6c722 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_53721c3d4de0bf661138394e5374172c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_53721c3d4de0bf661138394e5374172c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_f68fd071255bef02a0f31732a1eec3a3 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_f68fd071255bef02a0f31732a1eec3a3 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_2ffaceaa46888863bcb20dfba9c6c722 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_2ffaceaa46888863bcb20dfba9c6c722 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_53721c3d4de0bf661138394e5374172c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_53721c3d4de0bf661138394e5374172c >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_f68fd071255bef02a0f31732a1eec3a3 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_f68fd071255bef02a0f31732a1eec3a3 >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_2ffaceaa46888863bcb20dfba9c6c722/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_53721c3d4de0bf661138394e5374172c/brick >Result: >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 130.899µs >[kubeexec] DEBUG 2018/06/08 08:38:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc 
Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_f68fd071255bef02a0f31732a1eec3a3/brick >Result: >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 201.445µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 157.678µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 150.198µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 281.951µs >[kubeexec] DEBUG 2018/06/08 08:38:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume add-brick vol_4c71beb63ff6abb18f5ad259cbb0a53a 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_f68fd071255bef02a0f31732a1eec3a3/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_53721c3d4de0bf661138394e5374172c/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_2ffaceaa46888863bcb20dfba9c6c722/brick >Result: volume add-brick: success >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 245.915µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 131.308µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 162.346µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 212.625µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 214.889µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 226.111µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 230.645µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 
200 OK in 178.388µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 338.767µs >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 200 OK in 221.181µs >[kubeexec] DEBUG 2018/06/08 08:38:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume rebalance vol_4c71beb63ff6abb18f5ad259cbb0a53a start >Result: volume rebalance: vol_4c71beb63ff6abb18f5ad259cbb0a53a: success: Rebalance on vol_4c71beb63ff6abb18f5ad259cbb0a53a has been started successfully. Use rebalance status command to check status of the rebalance process. >ID: bd0a41ba-c7e1-4cf8-b736-e4424281746f > >[heketi] INFO 2018/06/08 08:38:52 Expand Volume succeeded >[asynchttp] INFO 2018/06/08 08:38:52 asynchttp.go:292: Completed job 680c1f99b6d8dd2df4c9a85721e53eb2 in 15.690946098s >[negroni] Started GET /queue/680c1f99b6d8dd2df4c9a85721e53eb2 >[negroni] Completed 303 See Other in 172.433µs >[negroni] Started GET /volumes/4c71beb63ff6abb18f5ad259cbb0a53a >[negroni] Completed 200 OK in 7.496832ms >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 243.818µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 385.039µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 3.990933ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 2.636697ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.671794ms >[negroni] Started GET /volumes/4c71beb63ff6abb18f5ad259cbb0a53a >[negroni] Completed 200 OK in 1.625615ms >[negroni] Started DELETE /volumes/4c71beb63ff6abb18f5ad259cbb0a53a >[negroni] Completed 202 Accepted in 12.042941ms >[asynchttp] INFO 2018/06/08 08:38:56 asynchttp.go:288: Started job 4c80322fad4a5f64b73a0caab03c74c2 >[heketi] INFO 
2018/06/08 08:38:56 Started async operation: Delete Volume >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 114.618µs >[kubeexec] DEBUG 2018/06/08 08:38:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_4c71beb63ff6abb18f5ad259cbb0a53a --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 215.402µs >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 216.558µs >[kubeexec] DEBUG 2018/06/08 08:38:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_4c71beb63ff6abb18f5ad259cbb0a53a force >Result: volume stop: vol_4c71beb63ff6abb18f5ad259cbb0a53a: success >[kubeexec] DEBUG 2018/06/08 08:38:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_4c71beb63ff6abb18f5ad259cbb0a53a >Result: volume delete: vol_4c71beb63ff6abb18f5ad259cbb0a53a: success >[heketi] INFO 2018/06/08 08:38:59 Deleting brick de28e8f1db02df454626ab7304cd17b6 >[heketi] INFO 2018/06/08 08:38:59 Deleting brick 2ffaceaa46888863bcb20dfba9c6c722 >[heketi] INFO 2018/06/08 08:38:59 Deleting brick 53721c3d4de0bf661138394e5374172c >[heketi] INFO 2018/06/08 08:38:59 Deleting brick e106612b2d23de31c6b364829b33a4a9 >[heketi] INFO 2018/06/08 08:38:59 Deleting brick f68fd071255bef02a0f31732a1eec3a3 >[heketi] INFO 2018/06/08 08:38:59 Deleting brick 88c50c0f97424a3116d7ecf9870b0f3e >[kubeexec] DEBUG 
2018/06/08 08:38:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_de28e8f1db02df454626ab7304cd17b6 | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_de28e8f1db02df454626ab7304cd17b6 >[kubeexec] DEBUG 2018/06/08 08:38:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_53721c3d4de0bf661138394e5374172c | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_53721c3d4de0bf661138394e5374172c >[kubeexec] DEBUG 2018/06/08 08:38:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_2ffaceaa46888863bcb20dfba9c6c722 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_2ffaceaa46888863bcb20dfba9c6c722 >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 221.045µs >[kubeexec] DEBUG 2018/06/08 08:38:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e106612b2d23de31c6b364829b33a4a9 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e106612b2d23de31c6b364829b33a4a9 >[kubeexec] DEBUG 2018/06/08 08:38:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w 
/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_f68fd071255bef02a0f31732a1eec3a3 | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_f68fd071255bef02a0f31732a1eec3a3 >[kubeexec] DEBUG 2018/06/08 08:39:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_88c50c0f97424a3116d7ecf9870b0f3e | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_88c50c0f97424a3116d7ecf9870b0f3e >[kubeexec] DEBUG 2018/06/08 08:39:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_53721c3d4de0bf661138394e5374172c > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_53721c3d4de0bf661138394e5374172c >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 135.089µs >[kubeexec] DEBUG 2018/06/08 08:39:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_de28e8f1db02df454626ab7304cd17b6 > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_de28e8f1db02df454626ab7304cd17b6 >[kubeexec] DEBUG 2018/06/08 08:39:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_2ffaceaa46888863bcb20dfba9c6c722 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_2ffaceaa46888863bcb20dfba9c6c722 >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] 
Completed 200 OK in 131.266µs >[kubeexec] DEBUG 2018/06/08 08:39:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e106612b2d23de31c6b364829b33a4a9 > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_e106612b2d23de31c6b364829b33a4a9 >[kubeexec] DEBUG 2018/06/08 08:39:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_f68fd071255bef02a0f31732a1eec3a3 > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_f68fd071255bef02a0f31732a1eec3a3 >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 356.061µs >[kubeexec] DEBUG 2018/06/08 08:39:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_88c50c0f97424a3116d7ecf9870b0f3e > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_88c50c0f97424a3116d7ecf9870b0f3e >[kubeexec] DEBUG 2018/06/08 08:39:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_53721c3d4de0bf661138394e5374172c >Result: >[kubeexec] DEBUG 2018/06/08 08:39:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_de28e8f1db02df454626ab7304cd17b6 >Result: >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 
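The delete path running here is symmetric to create: for each brick the log shows the device resolved from the mount table, the filesystem unmounted, the fstab entry dropped, the brick LV removed, and the thin pool's `thin_count` checked before the pool itself can go. A dry-run sketch of that per-brick teardown (prints the commands instead of running them; names are placeholders):

```shell
# Dry-run of the per-brick teardown sequence recorded in this log.
teardown_brick() {  # teardown_brick VG BRICK
    vg=$1; brick=$2
    mnt="/var/lib/heketi/mounts/${vg}/${brick}"
    dev="/dev/mapper/${vg}-${brick}"
    echo "umount ${mnt}"
    echo "sed -i.save \"/${brick}/d\" /var/lib/heketi/fstab"
    echo "lvremove --autobackup=n -f ${dev}"
    echo "lvs --noheadings --options=thin_count ${vg}/tp_${brick}"
}

# Example with placeholder names:
teardown_brick vg_example brick_example
```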
>[negroni] Completed 200 OK in 177.645µs >[kubeexec] DEBUG 2018/06/08 08:39:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_2ffaceaa46888863bcb20dfba9c6c722 >Result: >[kubeexec] DEBUG 2018/06/08 08:39:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e106612b2d23de31c6b364829b33a4a9 >Result: >[kubeexec] DEBUG 2018/06/08 08:39:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_f68fd071255bef02a0f31732a1eec3a3 >Result: >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 198.875µs >[kubeexec] DEBUG 2018/06/08 08:39:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_88c50c0f97424a3116d7ecf9870b0f3e >Result: >[kubeexec] DEBUG 2018/06/08 08:39:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_53721c3d4de0bf661138394e5374172c/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:39:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_de28e8f1db02df454626ab7304cd17b6/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 142.025µs >[kubeexec] DEBUG 2018/06/08 
08:39:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_2ffaceaa46888863bcb20dfba9c6c722/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:39:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_e106612b2d23de31c6b364829b33a4a9/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 202.349µs >[kubeexec] DEBUG 2018/06/08 08:39:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_f68fd071255bef02a0f31732a1eec3a3/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:39:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_88c50c0f97424a3116d7ecf9870b0f3e/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 219.098µs >[kubeexec] DEBUG 2018/06/08 08:39:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_53721c3d4de0bf661138394e5374172c > >Result: Logical volume "brick_53721c3d4de0bf661138394e5374172c" successfully removed >[kubeexec] DEBUG 2018/06/08 08:39:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_de28e8f1db02df454626ab7304cd17b6 > >Result: Logical volume "brick_de28e8f1db02df454626ab7304cd17b6" 
successfully removed >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 160.855µs >[kubeexec] DEBUG 2018/06/08 08:39:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_2ffaceaa46888863bcb20dfba9c6c722 > >Result: Logical volume "brick_2ffaceaa46888863bcb20dfba9c6c722" successfully removed >[kubeexec] DEBUG 2018/06/08 08:39:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e106612b2d23de31c6b364829b33a4a9 > >Result: Logical volume "brick_e106612b2d23de31c6b364829b33a4a9" successfully removed >[kubeexec] DEBUG 2018/06/08 08:39:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_f68fd071255bef02a0f31732a1eec3a3 > >Result: Logical volume "brick_f68fd071255bef02a0f31732a1eec3a3" successfully removed >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 266.45µs >[kubeexec] DEBUG 2018/06/08 08:39:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_88c50c0f97424a3116d7ecf9870b0f3e > >Result: Logical volume "brick_88c50c0f97424a3116d7ecf9870b0f3e" successfully removed >[kubeexec] DEBUG 2018/06/08 08:39:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count 
vg_96f1667f2f1ced2c5ef94772922be93b/tp_53721c3d4de0bf661138394e5374172c > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:39:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_de28e8f1db02df454626ab7304cd17b6 > >Result: 0 >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 195.7µs >[kubeexec] DEBUG 2018/06/08 08:39:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_2ffaceaa46888863bcb20dfba9c6c722 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:39:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_e106612b2d23de31c6b364829b33a4a9 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:39:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_f68fd071255bef02a0f31732a1eec3a3 > >Result: 0 >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 156.714µs >[kubeexec] DEBUG 2018/06/08 08:39:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_88c50c0f97424a3116d7ecf9870b0f3e > >Result: 0 >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 146.359µs >[kubeexec] DEBUG 2018/06/08 08:39:12 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_53721c3d4de0bf661138394e5374172c > >Result: Logical volume "tp_53721c3d4de0bf661138394e5374172c" successfully removed >[kubeexec] DEBUG 2018/06/08 08:39:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_de28e8f1db02df454626ab7304cd17b6 > >Result: Logical volume "tp_de28e8f1db02df454626ab7304cd17b6" successfully removed >[kubeexec] DEBUG 2018/06/08 08:39:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_2ffaceaa46888863bcb20dfba9c6c722 > >Result: Logical volume "tp_2ffaceaa46888863bcb20dfba9c6c722" successfully removed >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 164.818µs >[kubeexec] DEBUG 2018/06/08 08:39:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_e106612b2d23de31c6b364829b33a4a9 > >Result: Logical volume "tp_e106612b2d23de31c6b364829b33a4a9" successfully removed >[kubeexec] DEBUG 2018/06/08 08:39:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_f68fd071255bef02a0f31732a1eec3a3 > >Result: Logical volume "tp_f68fd071255bef02a0f31732a1eec3a3" successfully removed >[heketi] INFO 2018/06/08 08:39:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 
08:39:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 220.665µs >[kubeexec] DEBUG 2018/06/08 08:39:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_88c50c0f97424a3116d7ecf9870b0f3e > >Result: Logical volume "tp_88c50c0f97424a3116d7ecf9870b0f3e" successfully removed >[kubeexec] DEBUG 2018/06/08 08:39:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_53721c3d4de0bf661138394e5374172c >Result: >[kubeexec] DEBUG 2018/06/08 08:39:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_de28e8f1db02df454626ab7304cd17b6 >Result: >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 215.198µs >[kubeexec] DEBUG 2018/06/08 08:39:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_2ffaceaa46888863bcb20dfba9c6c722 >Result: >[kubeexec] DEBUG 2018/06/08 08:39:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e106612b2d23de31c6b364829b33a4a9 >Result: >[kubeexec] DEBUG 2018/06/08 08:39:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_f68fd071255bef02a0f31732a1eec3a3 >Result: >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 200 OK in 214.323µs >[kubeexec] DEBUG 2018/06/08 08:39:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 15min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ11892 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:39:16 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:39:16 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:39:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 15min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id 
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ12039 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:39:17 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:39:17 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:39:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_88c50c0f97424a3116d7ecf9870b0f3e >Result: >[heketi] INFO 2018/06/08 08:39:17 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:39:17 asynchttp.go:292: Completed job 4c80322fad4a5f64b73a0caab03c74c2 in 21.051068141s >[negroni] Started GET /queue/4c80322fad4a5f64b73a0caab03c74c2 >[negroni] Completed 204 No Content in 242.134µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 2.040568ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 482.276µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 6.934401ms >[kubeexec] DEBUG 2018/06/08 08:39:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 13min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level 
INFO > ââ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ10345 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:39:18 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:39:18 Cleaned 0 nodes from health cache >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 2.388622ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 3.121456ms >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 908.335µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.136091ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 710.48µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.258828ms >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 150.882µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 124.466µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 127.266µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 87.133µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 85.074µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 79.618µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 54.664µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 126.224µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 99.066µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 121.055µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 94.311µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 115.706µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 132.679µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 67.59µs >[negroni] Started POST /volumes >[negroni] 
Completed 401 Unauthorized in 47.642µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 49.557µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 136.136µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 142.9µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 106.699µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 100.489µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 137.379µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 130.053µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 146.979µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 191.131µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 90.736µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 68.559µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 54.344µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 76.897µs >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 98.85µs >[negroni] Completed 401 Unauthorized in 51.989µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 71.79µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 84.853µs >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 103.171µs >[negroni] Completed 401 Unauthorized in 67.285µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 178.836µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 148.089µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 81.364µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 46.169µs >[negroni] Started POST /volumes >[negroni] 
Completed 401 Unauthorized in 58.222µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 43.287µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 127.246µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 66.499µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 90.786µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 68.65µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 160.454µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 132.312µs >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 142.997µs >[negroni] Completed 401 Unauthorized in 93.143µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 98.415µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 91.53µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 100.242µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 88.342µs >[heketi] INFO 2018/06/08 08:41:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:41:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 17min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ11892 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:41:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:41:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 17min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ12039 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:41:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:41:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 15min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ10345 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:41:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:41:14 Cleaned 0 nodes from health cache >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 180.426µs >[negroni] Completed 401 Unauthorized in 116.966µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 122.222µs >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 99.31µs >[negroni] Completed 401 Unauthorized in 96.233µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 99.012µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 123.42µs >[negroni] Started POST /volumes >[negroni] Completed 401 Unauthorized in 114.021µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 165.274µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK 
in 698.161µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.024369ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 610.34µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 182.045µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 746.191µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 748.226µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 604.155µs >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:41:33 Allocating brick set #0 >[negroni] Completed 202 Accepted in 12.884009ms >[asynchttp] INFO 2018/06/08 08:41:33 asynchttp.go:288: Started job e762df464aa012622774563dece1d0c2 >[heketi] INFO 2018/06/08 08:41:33 Started async operation: Create Volume >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 147.179µs >[heketi] INFO 2018/06/08 08:41:33 Creating brick 4cb2a20a79c4ba5f51a7af73c72792d8 >[heketi] INFO 2018/06/08 08:41:33 Creating brick d093b076a37d3e3ba22944bd34ed0f9e >[heketi] INFO 2018/06/08 08:41:33 Creating brick 487a6210cd6beafaea7d458f0edc796f >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f >Result: >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e >Result: >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8 >Result: >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:41:33 Allocating brick set #0 >[negroni] Completed 202 Accepted in 33.142296ms >[asynchttp] INFO 2018/06/08 08:41:33 asynchttp.go:288: Started job e6d8d4b63c9bfa929b21498f22af956e >[heketi] INFO 2018/06/08 08:41:33 Started async operation: Create Volume >[negroni] Started GET /queue/e6d8d4b63c9bfa929b21498f22af956e >[negroni] Completed 200 OK in 107.692µs >[heketi] INFO 2018/06/08 08:41:33 Creating brick ff2a5406a86cd765269a1b519b76a810 >[heketi] INFO 2018/06/08 08:41:33 Creating brick ac730765fa6d170eb90c9795eaf5a9c5 >[heketi] INFO 2018/06/08 08:41:33 Creating brick 300c73448ebb5017a216c6661178ca8c >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_487a6210cd6beafaea7d458f0edc796f --virtualsize 2097152K --name brick_487a6210cd6beafaea7d458f0edc796f >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_487a6210cd6beafaea7d458f0edc796f" created. >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d093b076a37d3e3ba22944bd34ed0f9e --virtualsize 2097152K --name brick_d093b076a37d3e3ba22944bd34ed0f9e >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_d093b076a37d3e3ba22944bd34ed0f9e" created. 
>[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_4cb2a20a79c4ba5f51a7af73c72792d8 --virtualsize 2097152K --name brick_4cb2a20a79c4ba5f51a7af73c72792d8 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_4cb2a20a79c4ba5f51a7af73c72792d8" created. >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_487a6210cd6beafaea7d458f0edc796f >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_487a6210cd6beafaea7d458f0edc796f isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d093b076a37d3e3ba22944bd34ed0f9e >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d093b076a37d3e3ba22944bd34ed0f9e isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log 
=internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4cb2a20a79c4ba5f51a7af73c72792d8 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4cb2a20a79c4ba5f51a7af73c72792d8 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_487a6210cd6beafaea7d458f0edc796f /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d093b076a37d3e3ba22944bd34ed0f9e /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4cb2a20a79c4ba5f51a7af73c72792d8 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d093b076a37d3e3ba22944bd34ed0f9e /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e >Result: >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_487a6210cd6beafaea7d458f0edc796f /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f >Result: >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4cb2a20a79c4ba5f51a7af73c72792d8 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8 >Result: >[kubeexec] DEBUG 2018/06/08 08:41:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e/brick >Result: >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 102.786µs >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2000 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2000 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2000 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8/brick >Result: >[cmdexec] INFO 2018/06/08 08:41:34 Creating volume vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5 replica 3 >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810 >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5 >Result: >[negroni] Started GET /queue/e6d8d4b63c9bfa929b21498f22af956e >[negroni] Completed 200 OK in 104.554µs >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_ac730765fa6d170eb90c9795eaf5a9c5 --virtualsize 2097152K --name brick_ac730765fa6d170eb90c9795eaf5a9c5 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 
63.25 TiB of data. > Logical volume "brick_ac730765fa6d170eb90c9795eaf5a9c5" created. >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_ff2a5406a86cd765269a1b519b76a810 --virtualsize 2097152K --name brick_ff2a5406a86cd765269a1b519b76a810 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_ff2a5406a86cd765269a1b519b76a810" created. >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_300c73448ebb5017a216c6661178ca8c --virtualsize 2097152K --name brick_300c73448ebb5017a216c6661178ca8c >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_300c73448ebb5017a216c6661178ca8c" created. 
>[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_ff2a5406a86cd765269a1b519b76a810 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_ff2a5406a86cd765269a1b519b76a810 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ac730765fa6d170eb90c9795eaf5a9c5 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ac730765fa6d170eb90c9795eaf5a9c5 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_300c73448ebb5017a216c6661178ca8c >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_300c73448ebb5017a216c6661178ca8c isize=512 agcount=8, 
agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_ff2a5406a86cd765269a1b519b76a810 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ac730765fa6d170eb90c9795eaf5a9c5 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:41:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_300c73448ebb5017a216c6661178ca8c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 127.075µs >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_ff2a5406a86cd765269a1b519b76a810 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810 >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ac730765fa6d170eb90c9795eaf5a9c5 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5 >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_300c73448ebb5017a216c6661178ca8c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c/brick >Result: >[kubeexec] 
DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5/brick >Result: >[negroni] Started GET /queue/e6d8d4b63c9bfa929b21498f22af956e >[negroni] Completed 200 OK in 103.647µs >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c/brick >Result: >[cmdexec] INFO 2018/06/08 08:41:35 Creating volume 
vol_82aebcd425bd89e0d33e6a9bbd1f000d replica 3 >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_82aebcd425bd89e0d33e6a9bbd1f000d replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c/brick >Result: volume create: vol_82aebcd425bd89e0d33e6a9bbd1f000d: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:41:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f/brick >Result: volume create: vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5: success: please start the volume to access data >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 122.062µs >[negroni] Started GET /queue/e6d8d4b63c9bfa929b21498f22af956e >[negroni] Completed 200 OK in 132.67µs >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 104.531µs >[negroni] Started GET /queue/e6d8d4b63c9bfa929b21498f22af956e >[negroni] Completed 200 OK in 225.152µs >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 
OK in 114.528µs >[negroni] Started GET /queue/e6d8d4b63c9bfa929b21498f22af956e >[negroni] Completed 200 OK in 126.61µs >[kubeexec] DEBUG 2018/06/08 08:41:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_82aebcd425bd89e0d33e6a9bbd1f000d >Result: volume start: vol_82aebcd425bd89e0d33e6a9bbd1f000d: success >[heketi] INFO 2018/06/08 08:41:38 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:41:38 asynchttp.go:292: Completed job e6d8d4b63c9bfa929b21498f22af956e in 5.567575808s >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 124.077µs >[negroni] Started GET /queue/e6d8d4b63c9bfa929b21498f22af956e >[negroni] Completed 303 See Other in 131.237µs >[negroni] Started GET /volumes/82aebcd425bd89e0d33e6a9bbd1f000d >[negroni] Completed 200 OK in 3.608472ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 381.366µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.866118ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.565505ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 753.805µs >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 165.641µs >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 134.804µs >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 200 OK in 135.959µs >[kubeexec] DEBUG 2018/06/08 08:41:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5 >Result: volume start: vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5: success 
>[heketi] INFO 2018/06/08 08:41:42 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:41:42 asynchttp.go:292: Completed job e762df464aa012622774563dece1d0c2 in 9.937996305s >[negroni] Started GET /queue/e762df464aa012622774563dece1d0c2 >[negroni] Completed 303 See Other in 152.851µs >[negroni] Started GET /volumes/b1176cd4e68f9cd8d5ed06cbfd1d12b5 >[negroni] Completed 200 OK in 5.331127ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 218.784µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 2.644292ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.247719ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 767.683µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.065244ms >[negroni] Started GET /volumes/82aebcd425bd89e0d33e6a9bbd1f000d >[negroni] Completed 200 OK in 731.218µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 958.007µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.080655ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 816.114µs >[negroni] Started GET /volumes/b1176cd4e68f9cd8d5ed06cbfd1d12b5 >[negroni] Completed 200 OK in 642.058µs >[negroni] Started DELETE /volumes/82aebcd425bd89e0d33e6a9bbd1f000d >[negroni] Completed 202 Accepted in 13.504844ms >[asynchttp] INFO 2018/06/08 08:41:45 asynchttp.go:288: Started job 0a4cf62261c075ae3ed7e14a088725a5 >[heketi] INFO 2018/06/08 08:41:45 Started async operation: Delete Volume >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 155.272µs >[negroni] Started DELETE /volumes/b1176cd4e68f9cd8d5ed06cbfd1d12b5 >[negroni] Completed 202 Accepted in 12.556254ms >[asynchttp] INFO 2018/06/08 08:41:45 asynchttp.go:288: Started 
job 2ab3a3da6f9986f0807bddf5b0c58529 >[heketi] INFO 2018/06/08 08:41:45 Started async operation: Delete Volume >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 110.605µs >[kubeexec] DEBUG 2018/06/08 08:41:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_82aebcd425bd89e0d33e6a9bbd1f000d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[kubeexec] DEBUG 2018/06/08 08:41:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 131.927µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 168.983µs >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 190.251µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 137.632µs >[kubeexec] DEBUG 2018/06/08 08:41:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume stop vol_82aebcd425bd89e0d33e6a9bbd1f000d force >Result: volume stop: vol_82aebcd425bd89e0d33e6a9bbd1f000d: success >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 120.432µs >[negroni] Started GET 
/queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 119.583µs >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 172.83µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 198.654µs >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 199.305µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 192.032µs >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 251.152µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 224.679µs >[kubeexec] DEBUG 2018/06/08 08:41:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5 force >Result: volume stop: vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5: success >[kubeexec] DEBUG 2018/06/08 08:41:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume delete vol_82aebcd425bd89e0d33e6a9bbd1f000d >Result: volume delete: vol_82aebcd425bd89e0d33e6a9bbd1f000d: success >[heketi] INFO 2018/06/08 08:41:52 Deleting brick ff2a5406a86cd765269a1b519b76a810 >[heketi] INFO 2018/06/08 08:41:52 Deleting brick 300c73448ebb5017a216c6661178ca8c >[heketi] INFO 2018/06/08 08:41:52 Deleting brick ac730765fa6d170eb90c9795eaf5a9c5 >[kubeexec] DEBUG 2018/06/08 08:41:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810 | cut -d" " -f1 >Result: 
/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_ff2a5406a86cd765269a1b519b76a810 >[kubeexec] DEBUG 2018/06/08 08:41:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_300c73448ebb5017a216c6661178ca8c >[kubeexec] DEBUG 2018/06/08 08:41:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_ff2a5406a86cd765269a1b519b76a810 > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_ff2a5406a86cd765269a1b519b76a810 >[kubeexec] DEBUG 2018/06/08 08:41:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5 >Result: volume delete: vol_b1176cd4e68f9cd8d5ed06cbfd1d12b5: success >[heketi] INFO 2018/06/08 08:41:52 Deleting brick 4cb2a20a79c4ba5f51a7af73c72792d8 >[heketi] INFO 2018/06/08 08:41:52 Deleting brick d093b076a37d3e3ba22944bd34ed0f9e >[heketi] INFO 2018/06/08 08:41:52 Deleting brick 487a6210cd6beafaea7d458f0edc796f >[kubeexec] DEBUG 2018/06/08 08:41:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_300c73448ebb5017a216c6661178ca8c > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_300c73448ebb5017a216c6661178ca8c >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 107.634µs >[kubeexec] DEBUG 
2018/06/08 08:41:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5 | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ac730765fa6d170eb90c9795eaf5a9c5 >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 140.105µs >[kubeexec] DEBUG 2018/06/08 08:41:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810 >Result: >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 277.508µs >[kubeexec] DEBUG 2018/06/08 08:41:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4cb2a20a79c4ba5f51a7af73c72792d8 >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 138.757µs >[kubeexec] DEBUG 2018/06/08 08:41:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_487a6210cd6beafaea7d458f0edc796f >[kubeexec] DEBUG 2018/06/08 08:41:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: 
mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d093b076a37d3e3ba22944bd34ed0f9e >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 171.691µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 131.176µs >[kubeexec] DEBUG 2018/06/08 08:41:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c >Result: >[kubeexec] DEBUG 2018/06/08 08:41:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ac730765fa6d170eb90c9795eaf5a9c5 > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_ac730765fa6d170eb90c9795eaf5a9c5 >[kubeexec] DEBUG 2018/06/08 08:41:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_ff2a5406a86cd765269a1b519b76a810/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 217.844µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 194.602µs >[kubeexec] DEBUG 2018/06/08 08:41:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4cb2a20a79c4ba5f51a7af73c72792d8 > >Result: 
vg_96f1667f2f1ced2c5ef94772922be93b/tp_4cb2a20a79c4ba5f51a7af73c72792d8 >[kubeexec] DEBUG 2018/06/08 08:41:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_487a6210cd6beafaea7d458f0edc796f > >Result: vg_3a4297677881963e3f80124971d50eea/tp_487a6210cd6beafaea7d458f0edc796f >[kubeexec] DEBUG 2018/06/08 08:41:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d093b076a37d3e3ba22944bd34ed0f9e > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d093b076a37d3e3ba22944bd34ed0f9e >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 156.197µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 170.044µs >[kubeexec] DEBUG 2018/06/08 08:41:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_300c73448ebb5017a216c6661178ca8c/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:41:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5 >Result: >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 222.322µs >[kubeexec] DEBUG 2018/06/08 08:41:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f 
/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_ff2a5406a86cd765269a1b519b76a810 > >Result: Logical volume "brick_ff2a5406a86cd765269a1b519b76a810" successfully removed >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 125.286µs >[kubeexec] DEBUG 2018/06/08 08:41:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8 >Result: >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 186.37µs >[kubeexec] DEBUG 2018/06/08 08:41:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f >Result: >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 145.366µs >[kubeexec] DEBUG 2018/06/08 08:41:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e >Result: >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 170.982µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 178.699µs >[kubeexec] DEBUG 2018/06/08 08:41:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_300c73448ebb5017a216c6661178ca8c > >Result: Logical volume "brick_300c73448ebb5017a216c6661178ca8c" successfully removed >[kubeexec] DEBUG 2018/06/08 
08:42:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_ac730765fa6d170eb90c9795eaf5a9c5/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:42:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_ff2a5406a86cd765269a1b519b76a810 > >Result: 0 >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 144.645µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 180.43µs >[kubeexec] DEBUG 2018/06/08 08:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_4cb2a20a79c4ba5f51a7af73c72792d8/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_487a6210cd6beafaea7d458f0edc796f/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:42:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_d093b076a37d3e3ba22944bd34ed0f9e/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 142.815µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 268.898µs >[kubeexec] DEBUG 2018/06/08 08:42:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count 
vg_96f1667f2f1ced2c5ef94772922be93b/tp_300c73448ebb5017a216c6661178ca8c > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:42:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ac730765fa6d170eb90c9795eaf5a9c5 > >Result: Logical volume "brick_ac730765fa6d170eb90c9795eaf5a9c5" successfully removed >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 132.044µs >[kubeexec] DEBUG 2018/06/08 08:42:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_ff2a5406a86cd765269a1b519b76a810 > >Result: Logical volume "tp_ff2a5406a86cd765269a1b519b76a810" successfully removed >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 184.758µs >[kubeexec] DEBUG 2018/06/08 08:42:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4cb2a20a79c4ba5f51a7af73c72792d8 > >Result: Logical volume "brick_4cb2a20a79c4ba5f51a7af73c72792d8" successfully removed >[kubeexec] DEBUG 2018/06/08 08:42:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_487a6210cd6beafaea7d458f0edc796f > >Result: Logical volume "brick_487a6210cd6beafaea7d458f0edc796f" successfully removed >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 137.745µs >[kubeexec] DEBUG 2018/06/08 08:42:03 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d093b076a37d3e3ba22944bd34ed0f9e > >Result: Logical volume "brick_d093b076a37d3e3ba22944bd34ed0f9e" successfully removed >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 186.925µs >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 139.991µs >[kubeexec] DEBUG 2018/06/08 08:42:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_300c73448ebb5017a216c6661178ca8c > >Result: Logical volume "tp_300c73448ebb5017a216c6661178ca8c" successfully removed >[kubeexec] DEBUG 2018/06/08 08:42:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_ac730765fa6d170eb90c9795eaf5a9c5 > >Result: 0 >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 125.317µs >[kubeexec] DEBUG 2018/06/08 08:42:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_ff2a5406a86cd765269a1b519b76a810 >Result: >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 169.632µs >[kubeexec] DEBUG 2018/06/08 08:42:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count 
vg_96f1667f2f1ced2c5ef94772922be93b/tp_4cb2a20a79c4ba5f51a7af73c72792d8 > >Result: 0 >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 133.509µs >[kubeexec] DEBUG 2018/06/08 08:42:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_487a6210cd6beafaea7d458f0edc796f > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:42:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d093b076a37d3e3ba22944bd34ed0f9e > >Result: 0 >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 178.672µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 207.046µs >[kubeexec] DEBUG 2018/06/08 08:42:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_300c73448ebb5017a216c6661178ca8c >Result: >[kubeexec] DEBUG 2018/06/08 08:42:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_ac730765fa6d170eb90c9795eaf5a9c5 > >Result: Logical volume "tp_ac730765fa6d170eb90c9795eaf5a9c5" successfully removed >[kubeexec] DEBUG 2018/06/08 08:42:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d093b076a37d3e3ba22944bd34ed0f9e > >Result: Logical volume 
"tp_d093b076a37d3e3ba22944bd34ed0f9e" successfully removed >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 157.232µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 127.264µs >[kubeexec] DEBUG 2018/06/08 08:42:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_4cb2a20a79c4ba5f51a7af73c72792d8 > >Result: Logical volume "tp_4cb2a20a79c4ba5f51a7af73c72792d8" successfully removed >[kubeexec] DEBUG 2018/06/08 08:42:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_487a6210cd6beafaea7d458f0edc796f > >Result: Logical volume "tp_487a6210cd6beafaea7d458f0edc796f" successfully removed >[kubeexec] DEBUG 2018/06/08 08:42:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d093b076a37d3e3ba22944bd34ed0f9e >Result: >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 200 OK in 194.842µs >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 200 OK in 183.655µs >[kubeexec] DEBUG 2018/06/08 08:42:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4cb2a20a79c4ba5f51a7af73c72792d8 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir 
/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ac730765fa6d170eb90c9795eaf5a9c5 >Result: >[heketi] INFO 2018/06/08 08:42:09 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:42:09 asynchttp.go:292: Completed job 0a4cf62261c075ae3ed7e14a088725a5 in 23.781212579s >[negroni] Started GET /queue/0a4cf62261c075ae3ed7e14a088725a5 >[negroni] Completed 204 No Content in 184.386µs >[negroni] Started DELETE /volumes/82aebcd425bd89e0d33e6a9bbd1f000d >[negroni] Completed 404 Not Found in 2.48331ms >[kubeexec] DEBUG 2018/06/08 08:42:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_487a6210cd6beafaea7d458f0edc796f >Result: >[heketi] INFO 2018/06/08 08:42:09 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:42:09 asynchttp.go:292: Completed job 2ab3a3da6f9986f0807bddf5b0c58529 in 23.980774213s >[negroni] Started GET /queue/2ab3a3da6f9986f0807bddf5b0c58529 >[negroni] Completed 204 No Content in 191.255µs >[negroni] Started DELETE /volumes/b1176cd4e68f9cd8d5ed06cbfd1d12b5 >[negroni] Completed 404 Not Found in 2.078887ms >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 199.768µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 2.758047ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 604.138µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.066534ms >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 236.192µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 679.256µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 595.592µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 513.18µs 
>[negroni] Started GET /volumes >[negroni] Completed 200 OK in 181.753µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 634.022µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 744.733µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 625.994µs >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:42:10 Allocating brick set #0 >[negroni] Completed 202 Accepted in 13.369515ms >[asynchttp] INFO 2018/06/08 08:42:11 asynchttp.go:288: Started job 59515cc633de3ff6e62095c3ef71ec7b >[heketi] INFO 2018/06/08 08:42:11 Started async operation: Create Volume >[negroni] Started GET /queue/59515cc633de3ff6e62095c3ef71ec7b >[negroni] Completed 200 OK in 105.948µs >[heketi] INFO 2018/06/08 08:42:11 Creating brick d7770ed15229b256bf58c081ad57ea73 >[heketi] INFO 2018/06/08 08:42:11 Creating brick e5ae8fc12a741a851b2aade547fe3615 >[heketi] INFO 2018/06/08 08:42:11 Creating brick bfb72d4e8e105df407a890930c475175 >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_bfb72d4e8e105df407a890930c475175 --virtualsize 2097152K --name brick_bfb72d4e8e105df407a890930c475175 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_bfb72d4e8e105df407a890930c475175" created. >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_e5ae8fc12a741a851b2aade547fe3615 --virtualsize 2097152K --name brick_e5ae8fc12a741a851b2aade547fe3615 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e5ae8fc12a741a851b2aade547fe3615" created. >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_d7770ed15229b256bf58c081ad57ea73 --virtualsize 2097152K --name brick_d7770ed15229b256bf58c081ad57ea73 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_d7770ed15229b256bf58c081ad57ea73" created. 
>[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bfb72d4e8e105df407a890930c475175 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bfb72d4e8e105df407a890930c475175 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e5ae8fc12a741a851b2aade547fe3615 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e5ae8fc12a741a851b2aade547fe3615 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d7770ed15229b256bf58c081ad57ea73 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d7770ed15229b256bf58c081ad57ea73 isize=512 agcount=8, 
agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bfb72d4e8e105df407a890930c475175 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d7770ed15229b256bf58c081ad57ea73 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e5ae8fc12a741a851b2aade547fe3615 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bfb72d4e8e105df407a890930c475175 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d7770ed15229b256bf58c081ad57ea73 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e5ae8fc12a741a851b2aade547fe3615 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:11 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2000 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175/brick
>Result:
>[negroni] Started GET /queue/59515cc633de3ff6e62095c3ef71ec7b
>[negroni] Completed 200 OK in 103.52µs
>[kubeexec] DEBUG 2018/06/08 08:42:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2000 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2000 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:42:12 Creating volume vol_7851878e4341c93e517bff5cad8f7b49 replica 3
>[kubeexec] DEBUG 2018/06/08 08:42:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_7851878e4341c93e517bff5cad8f7b49 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73/brick
>Result: volume create: vol_7851878e4341c93e517bff5cad8f7b49: success: please start the volume to access data
>[negroni] Started GET /queue/59515cc633de3ff6e62095c3ef71ec7b
>[negroni] Completed 200 OK in 114.191µs
>[negroni] Started GET /queue/59515cc633de3ff6e62095c3ef71ec7b
>[negroni] Completed 200 OK in 121.978µs
>[negroni] Started GET /queue/59515cc633de3ff6e62095c3ef71ec7b
>[negroni] Completed 200 OK in 118.228µs
>[negroni] Started GET /queue/59515cc633de3ff6e62095c3ef71ec7b
>[negroni] Completed 200 OK in 132.456µs
>[kubeexec] DEBUG 2018/06/08 08:42:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_7851878e4341c93e517bff5cad8f7b49
>Result: volume start: vol_7851878e4341c93e517bff5cad8f7b49: success
>[heketi] INFO 2018/06/08 08:42:16 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:42:16 asynchttp.go:292: Completed job 59515cc633de3ff6e62095c3ef71ec7b in 5.62399169s
>[negroni] Started GET /queue/59515cc633de3ff6e62095c3ef71ec7b
>[negroni] Completed 303 See Other in 139.381µs
>[negroni] Started GET /volumes/7851878e4341c93e517bff5cad8f7b49
>[negroni] Completed 200 OK in 3.177048ms
>[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d
>[negroni] Completed 200 OK in 181.198µs
>[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1
>[negroni] Completed 200 OK in 2.306593ms
>[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49
>[negroni] Completed 200 OK in 1.050894ms
>[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650
>[negroni] Completed 200 OK in 965.488µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 1.655388ms
>[negroni] Started GET /volumes/7851878e4341c93e517bff5cad8f7b49
>[negroni] Completed 200 OK in 1.184758ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 579.775µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 556.564µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 563.353µs
>[negroni] Started POST /volumes
>[heketi] INFO 2018/06/08 08:42:18 Allocating brick set #0
>[negroni] Completed 202 Accepted in 12.427785ms
>[asynchttp] INFO 2018/06/08 08:42:18 asynchttp.go:288: Started job 45e38b9bc2afc427093a8aff2f3898c1
>[heketi] INFO 2018/06/08 08:42:18 Started async operation: Create Volume
>[negroni] Started GET /queue/45e38b9bc2afc427093a8aff2f3898c1
>[negroni] Completed 200 OK in 98.937µs
>[heketi] INFO 2018/06/08 08:42:18 Creating brick dc9c085778d516278c0cadca4db4be15
>[heketi] INFO 2018/06/08 08:42:18 Creating brick 5b2e140dcb360bbba091b9c24633aec7
>[heketi] INFO 2018/06/08 08:42:18 Creating brick 4c77a98309b18630c0618c89c97ba0e0
>[kubeexec] DEBUG 2018/06/08 08:42:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_5b2e140dcb360bbba091b9c24633aec7 --virtualsize 2097152K --name brick_5b2e140dcb360bbba091b9c24633aec7
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_5b2e140dcb360bbba091b9c24633aec7" created.
>[kubeexec] DEBUG 2018/06/08 08:42:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_4c77a98309b18630c0618c89c97ba0e0 --virtualsize 2097152K --name brick_4c77a98309b18630c0618c89c97ba0e0
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_4c77a98309b18630c0618c89c97ba0e0" created.
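The lvcreate entries above show the fixed per-brick provisioning recipe heketi runs on each node: create a thin LV inside a thin pool, make an XFS filesystem on it, record an fstab entry, and mount it. The sketch below reconstructs that command sequence from the log; it only prints the commands rather than executing them, and `vg_example`/`brick_example` and the 2 GiB size are placeholders, not values heketi generated.

```shell
# Sketch only: per-brick provisioning commands as seen in the log above.
# Placeholder IDs; nothing here touches real LVM or filesystems.
VG=vg_example
BRICK=brick_example
TP=tp_${BRICK#brick_}                    # thin pool reuses the brick's ID with a tp_ prefix
DEV=/dev/mapper/${VG}-${BRICK}
MNT=/var/lib/heketi/mounts/${VG}/${BRICK}
echo "lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin ${VG}/${TP} --virtualsize 2097152K --name ${BRICK}"
echo "mkfs.xfs -i size=512 -n size=8192 ${DEV}"
echo "mount -o rw,inode64,noatime,nouuid ${DEV} ${MNT}"
```

The device path `/dev/mapper/<vg>-<brick>` and mount point `/var/lib/heketi/mounts/<vg>/<brick>` follow directly from the names in the log.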
>[kubeexec] DEBUG 2018/06/08 08:42:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_dc9c085778d516278c0cadca4db4be15 --virtualsize 2097152K --name brick_dc9c085778d516278c0cadca4db4be15
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_dc9c085778d516278c0cadca4db4be15" created.
>[kubeexec] DEBUG 2018/06/08 08:42:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5b2e140dcb360bbba091b9c24633aec7
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5b2e140dcb360bbba091b9c24633aec7 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dc9c085778d516278c0cadca4db4be15
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dc9c085778d516278c0cadca4db4be15 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5b2e140dcb360bbba091b9c24633aec7 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dc9c085778d516278c0cadca4db4be15 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4c77a98309b18630c0618c89c97ba0e0
>Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4c77a98309b18630c0618c89c97ba0e0 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4c77a98309b18630c0618c89c97ba0e0 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dc9c085778d516278c0cadca4db4be15 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5b2e140dcb360bbba091b9c24633aec7 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7
>Result:
>[negroni] Started GET /queue/45e38b9bc2afc427093a8aff2f3898c1
>[negroni] Completed 200 OK in 178.119µs
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4c77a98309b18630c0618c89c97ba0e0 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:42:19 Creating volume vol_48f9db6f6531b638a436926286f6eefb replica 3
>[kubeexec] DEBUG 2018/06/08 08:42:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_48f9db6f6531b638a436926286f6eefb replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15/brick
>Result: volume create: vol_48f9db6f6531b638a436926286f6eefb: success: please start the volume to access data
>[negroni] Started GET /queue/45e38b9bc2afc427093a8aff2f3898c1
>[negroni] Completed 200 OK in 166.343µs
>[negroni] Started GET /queue/45e38b9bc2afc427093a8aff2f3898c1
>[negroni] Completed 200 OK in 144.698µs
>[negroni] Started GET /queue/45e38b9bc2afc427093a8aff2f3898c1
>[negroni] Completed 200 OK in 140.676µs
>[kubeexec] DEBUG 2018/06/08 08:42:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_48f9db6f6531b638a436926286f6eefb
>Result: volume start: vol_48f9db6f6531b638a436926286f6eefb: success
>[heketi] INFO 2018/06/08 08:42:23 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:42:23 asynchttp.go:292: Completed job 45e38b9bc2afc427093a8aff2f3898c1 in 4.808623056s
>[negroni] Started GET /queue/45e38b9bc2afc427093a8aff2f3898c1
>[negroni] Completed 303 See Other in 257.988µs
>[negroni] Started GET /volumes/48f9db6f6531b638a436926286f6eefb
>[negroni] Completed 200 OK in 5.975968ms
>[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d
>[negroni] Completed 200 OK in 398.081µs
>[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1
>[negroni] Completed 200 OK in 4.60131ms
>[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49
>[negroni] Completed 200 OK in 2.047219ms
>[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650
>[negroni] Completed 200 OK in 1.35698ms
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 848.237µs
>[negroni] Started GET /volumes/48f9db6f6531b638a436926286f6eefb
>[negroni] Completed 200 OK in 610.21µs
>[negroni] Started GET /volumes/7851878e4341c93e517bff5cad8f7b49
>[negroni] Completed 200 OK in 544.35µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 620.488µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 834.869µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 1.791059ms
>[negroni] Started DELETE /volumes/48f9db6f6531b638a436926286f6eefb
>[negroni] Completed 202 Accepted in 8.269575ms
>[asynchttp] INFO 2018/06/08 08:42:25 asynchttp.go:288: Started job 84472ae559c45c9b84ab85d4a5852f91
>[heketi] INFO 2018/06/08 08:42:25 Started async operation: Delete Volume
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 135.666µs
>[kubeexec] DEBUG 2018/06/08 08:42:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_48f9db6f6531b638a436926286f6eefb --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[negroni] Started DELETE /volumes/7851878e4341c93e517bff5cad8f7b49
>[negroni] Completed 202 Accepted in 11.408925ms
>[asynchttp] INFO 2018/06/08 08:42:25 asynchttp.go:288: Started job f1e044bedf7f37fb57361f115a62841b
>[heketi] INFO 2018/06/08 08:42:25 Started async operation: Delete Volume
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 94.73µs
>[kubeexec] DEBUG 2018/06/08 08:42:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_7851878e4341c93e517bff5cad8f7b49 --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 167.676µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 144.229µs
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 146.918µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 116.417µs
>[kubeexec] DEBUG 2018/06/08 08:42:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume stop vol_48f9db6f6531b638a436926286f6eefb force
>Result: volume stop: vol_48f9db6f6531b638a436926286f6eefb: success
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 116.47µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 142.23µs
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 218.451µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 123.573µs
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 139.351µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 141.994µs
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 138.308µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 193.055µs
>[kubeexec] DEBUG 2018/06/08 08:42:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_7851878e4341c93e517bff5cad8f7b49 force
>Result: volume stop: vol_7851878e4341c93e517bff5cad8f7b49: success
>[kubeexec] DEBUG 2018/06/08 08:42:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume delete vol_48f9db6f6531b638a436926286f6eefb
>Result: volume delete: vol_48f9db6f6531b638a436926286f6eefb: success
>[heketi] INFO 2018/06/08 08:42:32 Deleting brick 4c77a98309b18630c0618c89c97ba0e0
>[heketi] INFO 2018/06/08 08:42:32 Deleting brick 5b2e140dcb360bbba091b9c24633aec7
>[heketi] INFO 2018/06/08 08:42:32 Deleting brick dc9c085778d516278c0cadca4db4be15
>[kubeexec] DEBUG 2018/06/08 08:42:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15 | cut -d" " -f1
>Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dc9c085778d516278c0cadca4db4be15
>[kubeexec] DEBUG 2018/06/08 08:42:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0 | cut -d" " -f1
>Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4c77a98309b18630c0618c89c97ba0e0
>[kubeexec] DEBUG 2018/06/08 08:42:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_7851878e4341c93e517bff5cad8f7b49
>Result: volume delete: vol_7851878e4341c93e517bff5cad8f7b49: success
>[heketi] INFO 2018/06/08 08:42:32 Deleting brick bfb72d4e8e105df407a890930c475175
>[heketi] INFO 2018/06/08 08:42:32 Deleting brick d7770ed15229b256bf58c081ad57ea73
>[heketi] INFO 2018/06/08 08:42:32 Deleting brick e5ae8fc12a741a851b2aade547fe3615
>[kubeexec] DEBUG 2018/06/08 08:42:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4c77a98309b18630c0618c89c97ba0e0
>
>Result: vg_9394bc70699b006c5460c9f654cf345f/tp_4c77a98309b18630c0618c89c97ba0e0
>[kubeexec] DEBUG 2018/06/08 08:42:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dc9c085778d516278c0cadca4db4be15
>
>Result: vg_d389f0278a774bd7443a09af960961d8/tp_dc9c085778d516278c0cadca4db4be15
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 114.315µs
>[kubeexec] DEBUG 2018/06/08 08:42:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7 | cut -d" " -f1
>Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5b2e140dcb360bbba091b9c24633aec7
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 161.809µs
>[kubeexec] DEBUG 2018/06/08 08:42:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615 | cut -d" " -f1
>Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e5ae8fc12a741a851b2aade547fe3615
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 145.816µs
>[kubeexec] DEBUG 2018/06/08 08:42:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73 | cut -d" " -f1
>Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d7770ed15229b256bf58c081ad57ea73
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 171.342µs
>[kubeexec] DEBUG 2018/06/08 08:42:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175 | cut -d" " -f1
>Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bfb72d4e8e105df407a890930c475175
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 108.678µs
>[kubeexec] DEBUG 2018/06/08 08:42:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0
>Result:
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 123.278µs
>[kubeexec] DEBUG 2018/06/08 08:42:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5b2e140dcb360bbba091b9c24633aec7
>
>Result: vg_3a4297677881963e3f80124971d50eea/tp_5b2e140dcb360bbba091b9c24633aec7
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 173.072µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 111.529µs
>[kubeexec] DEBUG 2018/06/08 08:42:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e5ae8fc12a741a851b2aade547fe3615
>
>Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_e5ae8fc12a741a851b2aade547fe3615
>[kubeexec] DEBUG 2018/06/08 08:42:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d7770ed15229b256bf58c081ad57ea73
>
>Result: vg_d389f0278a774bd7443a09af960961d8/tp_d7770ed15229b256bf58c081ad57ea73
>[kubeexec] DEBUG 2018/06/08 08:42:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bfb72d4e8e105df407a890930c475175
>
>Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_bfb72d4e8e105df407a890930c475175
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 130.875µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 190.245µs
>[kubeexec] DEBUG 2018/06/08 08:42:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_4c77a98309b18630c0618c89c97ba0e0/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_dc9c085778d516278c0cadca4db4be15/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7
>Result:
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 164.855µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 118.761µs
>[kubeexec] DEBUG 2018/06/08 08:42:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73
>Result:
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 159.016µs
>[kubeexec] DEBUG 2018/06/08 08:42:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175
>Result:
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 163.439µs
>[kubeexec] DEBUG 2018/06/08 08:42:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4c77a98309b18630c0618c89c97ba0e0
>
>Result: Logical volume "brick_4c77a98309b18630c0618c89c97ba0e0" successfully removed
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 126.503µs
>[kubeexec] DEBUG 2018/06/08 08:42:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dc9c085778d516278c0cadca4db4be15
>
>Result: Logical volume "brick_dc9c085778d516278c0cadca4db4be15" successfully removed
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 108.059µs
>[kubeexec] DEBUG 2018/06/08 08:42:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_5b2e140dcb360bbba091b9c24633aec7/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 125.235µs
>[kubeexec] DEBUG 2018/06/08 08:42:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_e5ae8fc12a741a851b2aade547fe3615/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 106.963µs
>[kubeexec] DEBUG 2018/06/08 08:42:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_d7770ed15229b256bf58c081ad57ea73/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_bfb72d4e8e105df407a890930c475175/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 220.133µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 171.973µs
>[kubeexec] DEBUG 2018/06/08 08:42:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_4c77a98309b18630c0618c89c97ba0e0
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:42:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_dc9c085778d516278c0cadca4db4be15
>
>Result: 0
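The `lvs --options=thin_count` checks above precede each thin-pool `lvremove`: heketi removes the brick LV first, then removes the pool only once it reports no remaining thin volumes. The sketch below reproduces that gating logic with a stubbed `lvs`, so it runs without LVM; `vg_example/tp_example` is a placeholder name.

```shell
# Sketch only: thin-pool removal gated on thin_count, as in the log above.
# lvs is stubbed; no real LVM state is read or modified.
lvs() { echo "    0"; }    # stub: pretend the pool has no thin LVs left
pool=vg_example/tp_example
count=$(lvs --noheadings --options=thin_count "$pool" | tr -d ' ')
if [ "$count" = "0" ]; then
  action="lvremove --autobackup=n -f $pool"
else
  action="skip: pool still has $count thin volumes"
fi
echo "$action"
```

With the stub returning 0, the printed action is the same `lvremove --autobackup=n -f` form visible in the log.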
>[kubeexec] DEBUG 2018/06/08 08:42:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5b2e140dcb360bbba091b9c24633aec7
>
>Result: Logical volume "brick_5b2e140dcb360bbba091b9c24633aec7" successfully removed
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 110.57µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 108.378µs
>[kubeexec] DEBUG 2018/06/08 08:42:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e5ae8fc12a741a851b2aade547fe3615
>
>Result: Logical volume "brick_e5ae8fc12a741a851b2aade547fe3615" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:42:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_d7770ed15229b256bf58c081ad57ea73
>
>Result: Logical volume "brick_d7770ed15229b256bf58c081ad57ea73" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:42:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bfb72d4e8e105df407a890930c475175
>
>Result: Logical volume "brick_bfb72d4e8e105df407a890930c475175" successfully removed
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 208.668µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 127.372µs
>[kubeexec] DEBUG 2018/06/08 08:42:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_4c77a98309b18630c0618c89c97ba0e0
>
>Result: Logical volume "tp_4c77a98309b18630c0618c89c97ba0e0" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:42:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_dc9c085778d516278c0cadca4db4be15
>
>Result: Logical volume "tp_dc9c085778d516278c0cadca4db4be15" successfully removed
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 192.448µs
>[kubeexec] DEBUG 2018/06/08 08:42:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_5b2e140dcb360bbba091b9c24633aec7
>
>Result: 0
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 164.198µs
>[kubeexec] DEBUG 2018/06/08 08:42:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_e5ae8fc12a741a851b2aade547fe3615
>
>Result: 0
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 174.553µs
>[kubeexec] DEBUG 2018/06/08 08:42:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_d7770ed15229b256bf58c081ad57ea73
>
>Result: 0
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 116.448µs
>[kubeexec] DEBUG 2018/06/08 08:42:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_bfb72d4e8e105df407a890930c475175
>
>Result: 0
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 116.179µs
>[kubeexec] DEBUG 2018/06/08 08:42:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4c77a98309b18630c0618c89c97ba0e0
>Result:
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 111.506µs
>[kubeexec] DEBUG 2018/06/08 08:42:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dc9c085778d516278c0cadca4db4be15
>Result:
>[kubeexec] DEBUG 2018/06/08 08:42:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_5b2e140dcb360bbba091b9c24633aec7
>
>Result: Logical volume "tp_5b2e140dcb360bbba091b9c24633aec7" successfully removed
>[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91
>[negroni] Completed 200 OK in 162.504µs
>[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b
>[negroni] Completed 200 OK in 182.842µs
>[kubeexec] DEBUG 2018/06/08 08:42:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_e5ae8fc12a741a851b2aade547fe3615
>
>Result:
Logical volume "tp_e5ae8fc12a741a851b2aade547fe3615" successfully removed >[kubeexec] DEBUG 2018/06/08 08:42:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_d7770ed15229b256bf58c081ad57ea73 > >Result: Logical volume "tp_d7770ed15229b256bf58c081ad57ea73" successfully removed >[kubeexec] DEBUG 2018/06/08 08:42:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_bfb72d4e8e105df407a890930c475175 > >Result: Logical volume "tp_bfb72d4e8e105df407a890930c475175" successfully removed >[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91 >[negroni] Completed 200 OK in 172.232µs >[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b >[negroni] Completed 200 OK in 175.795µs >[kubeexec] DEBUG 2018/06/08 08:42:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e5ae8fc12a741a851b2aade547fe3615 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_d7770ed15229b256bf58c081ad57ea73 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5b2e140dcb360bbba091b9c24633aec7 >Result: >[heketi] INFO 2018/06/08 08:42:49 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:42:49 
asynchttp.go:292: Completed job 84472ae559c45c9b84ab85d4a5852f91 in 23.865479238s >[negroni] Started GET /queue/84472ae559c45c9b84ab85d4a5852f91 >[negroni] Completed 204 No Content in 164.556µs >[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b >[negroni] Completed 200 OK in 106.769µs >[kubeexec] DEBUG 2018/06/08 08:42:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bfb72d4e8e105df407a890930c475175 >Result: >[heketi] INFO 2018/06/08 08:42:50 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:42:50 asynchttp.go:292: Completed job f1e044bedf7f37fb57361f115a62841b in 24.052834729s >[negroni] Started GET /queue/f1e044bedf7f37fb57361f115a62841b >[negroni] Completed 204 No Content in 157.203µs >[negroni] Started DELETE /volumes/7851878e4341c93e517bff5cad8f7b49 >[negroni] Completed 404 Not Found in 3.675742ms >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 176.799µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 3.00093ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.158224ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.668917ms >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 184.905µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 705.014µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.189942ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.088299ms >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:42:52 Allocating brick set #0 >[negroni] Completed 202 Accepted in 13.881185ms >[asynchttp] INFO 2018/06/08 08:42:52 asynchttp.go:288: Started job 
44168b106c812de9f08e3a8754d679af >[heketi] INFO 2018/06/08 08:42:52 Started async operation: Create Volume >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 88.113µs >[heketi] INFO 2018/06/08 08:42:52 Creating brick a48060da658afcf0bd672a1d5e325a35 >[heketi] INFO 2018/06/08 08:42:52 Creating brick eab93f37cd264f7b8d5185da3d5e1ff2 >[heketi] INFO 2018/06/08 08:42:52 Creating brick 6041796df9e6e8dde7acbc28fb15eebd >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:42:52 Allocating brick set #0 >[negroni] Completed 202 Accepted in 17.146336ms >[asynchttp] INFO 2018/06/08 08:42:52 asynchttp.go:288: Started job f4bd066349a28138ceafa752530e031d >[heketi] INFO 2018/06/08 08:42:52 Started async operation: Create Volume >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 110.896µs >[heketi] INFO 2018/06/08 08:42:52 Creating brick 9b2ad28a2bba6499ea37c90a18a2f09e >[heketi] INFO 2018/06/08 08:42:52 Creating brick 92deb78ca12e07cbb3500736c89f75fb >[heketi] INFO 2018/06/08 08:42:52 Creating brick 69c43b1a8d9e0514a7e21bde6a2b8a94 >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd >Result: >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_eab93f37cd264f7b8d5185da3d5e1ff2 --virtualsize 2097152K --name brick_eab93f37cd264f7b8d5185da3d5e1ff2 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_eab93f37cd264f7b8d5185da3d5e1ff2" created. >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_a48060da658afcf0bd672a1d5e325a35 --virtualsize 2097152K --name brick_a48060da658afcf0bd672a1d5e325a35 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_a48060da658afcf0bd672a1d5e325a35" created. >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_6041796df9e6e8dde7acbc28fb15eebd --virtualsize 2097152K --name brick_6041796df9e6e8dde7acbc28fb15eebd >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_6041796df9e6e8dde7acbc28fb15eebd" created. 
>[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_a48060da658afcf0bd672a1d5e325a35 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_a48060da658afcf0bd672a1d5e325a35 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6041796df9e6e8dde7acbc28fb15eebd >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6041796df9e6e8dde7acbc28fb15eebd isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_eab93f37cd264f7b8d5185da3d5e1ff2 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_eab93f37cd264f7b8d5185da3d5e1ff2 isize=512 agcount=8, 
agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_a48060da658afcf0bd672a1d5e325a35 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6041796df9e6e8dde7acbc28fb15eebd /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_eab93f37cd264f7b8d5185da3d5e1ff2 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_eab93f37cd264f7b8d5185da3d5e1ff2 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_a48060da658afcf0bd672a1d5e325a35 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6041796df9e6e8dde7acbc28fb15eebd /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35/brick >Result: >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 140.028µs >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir 
/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2000 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2000 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2/brick >Result: >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 93.878µs >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2000 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 
/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd/brick >Result: >[cmdexec] INFO 2018/06/08 08:42:53 Creating volume vol_519804da0a04715f5fe88eef3554e109 replica 3 >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94 >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_69c43b1a8d9e0514a7e21bde6a2b8a94 --virtualsize 2097152K --name brick_69c43b1a8d9e0514a7e21bde6a2b8a94 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_69c43b1a8d9e0514a7e21bde6a2b8a94" created. 
>[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_92deb78ca12e07cbb3500736c89f75fb --virtualsize 2097152K --name brick_92deb78ca12e07cbb3500736c89f75fb >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_92deb78ca12e07cbb3500736c89f75fb" created. >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_9b2ad28a2bba6499ea37c90a18a2f09e --virtualsize 2097152K --name brick_9b2ad28a2bba6499ea37c90a18a2f09e >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_9b2ad28a2bba6499ea37c90a18a2f09e" created. 
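After each `lvcreate`/`mkfs.xfs` pair, the create flow registers the new brick's mount by running `awk "BEGIN {print ... >> \"/var/lib/heketi/fstab\"}"` on the node. A runnable sketch of that append idiom, against a temp file and with a hypothetical device/mount pair (not taken from this log):

```shell
# Scratch file standing in for /var/lib/heketi/fstab; the device path and
# mount point below are hypothetical examples
fstab="$(mktemp)"

# Same idiom as the log: a BEGIN-only awk program runs once without reading
# any input, and the >> redirection inside awk appends the entry to the file
awk "BEGIN {print \"/dev/mapper/vg_demo-brick_123 /var/lib/heketi/mounts/vg_demo/brick_123 xfs rw,inode64,noatime,nouuid 1 2\" >> \"$fstab\"}"

grep 'brick_123' "$fstab"   # the appended mount entry
rm -f "$fstab"
```

Using awk's own `>>` redirection (rather than a shell redirect) keeps the append inside the single command string that the executor ships to the pod; the subsequent `mount -o rw,inode64,noatime,nouuid` entries in the log then mount the brick with the same options recorded in fstab.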
>[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_69c43b1a8d9e0514a7e21bde6a2b8a94 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_69c43b1a8d9e0514a7e21bde6a2b8a94 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_92deb78ca12e07cbb3500736c89f75fb >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_92deb78ca12e07cbb3500736c89f75fb isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_9b2ad28a2bba6499ea37c90a18a2f09e >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_9b2ad28a2bba6499ea37c90a18a2f09e isize=512 agcount=8, 
agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_92deb78ca12e07cbb3500736c89f75fb /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_69c43b1a8d9e0514a7e21bde6a2b8a94 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_9b2ad28a2bba6499ea37c90a18a2f09e /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_92deb78ca12e07cbb3500736c89f75fb /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_69c43b1a8d9e0514a7e21bde6a2b8a94 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94 >Result: >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 104.967µs >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_9b2ad28a2bba6499ea37c90a18a2f09e /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e/brick >Result: >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 85.8µs >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e/brick >Result: >[cmdexec] INFO 2018/06/08 08:42:54 Creating volume vol_34180b600b060c8400f7902d50d19fce replica 3 >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_34180b600b060c8400f7902d50d19fce replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94/brick >Result: volume create: vol_34180b600b060c8400f7902d50d19fce: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:42:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_519804da0a04715f5fe88eef3554e109 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd/brick >Result: volume create: vol_519804da0a04715f5fe88eef3554e109: success: please start the volume to access data >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 338.092µs >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 150.507µs >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 109.564µs >[negroni] Started GET 
/queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 116.247µs >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 121.895µs >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 118.864µs >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 109.534µs >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 118.054µs >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 171.078µs >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 170.979µs >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 188.273µs >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 231.622µs >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 200 OK in 144.627µs >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 200 OK in 169.804µs >[kubeexec] DEBUG 2018/06/08 08:43:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_34180b600b060c8400f7902d50d19fce >Result: volume start: vol_34180b600b060c8400f7902d50d19fce: success >[kubeexec] DEBUG 2018/06/08 08:43:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_519804da0a04715f5fe88eef3554e109 >Result: volume start: vol_519804da0a04715f5fe88eef3554e109: success >[heketi] INFO 2018/06/08 08:43:02 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:43:02 asynchttp.go:292: Completed job f4bd066349a28138ceafa752530e031d in 9.792582551s >[heketi] INFO 2018/06/08 08:43:02 Create Volume 
succeeded >[asynchttp] INFO 2018/06/08 08:43:02 asynchttp.go:292: Completed job 44168b106c812de9f08e3a8754d679af in 9.90405144s >[negroni] Started GET /queue/44168b106c812de9f08e3a8754d679af >[negroni] Completed 303 See Other in 206.983µs >[negroni] Started GET /volumes/519804da0a04715f5fe88eef3554e109 >[negroni] Completed 200 OK in 7.06093ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 201.816µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 4.061402ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 2.282748ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.356823ms >[negroni] Started GET /queue/f4bd066349a28138ceafa752530e031d >[negroni] Completed 303 See Other in 181.552µs >[negroni] Started GET /volumes/34180b600b060c8400f7902d50d19fce >[negroni] Completed 200 OK in 784.081µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 469.804µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.131032ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 794.968µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 736.634µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.028392ms >[negroni] Started GET /volumes/34180b600b060c8400f7902d50d19fce >[negroni] Completed 200 OK in 1.130505ms >[negroni] Started GET /volumes/519804da0a04715f5fe88eef3554e109 >[negroni] Completed 200 OK in 1.029772ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 623.296µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 999.173µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 
1.75559ms >[negroni] Started DELETE /volumes/34180b600b060c8400f7902d50d19fce >[negroni] Completed 202 Accepted in 9.569326ms >[asynchttp] INFO 2018/06/08 08:43:04 asynchttp.go:288: Started job 346c61e53782b5e0c258f3a62595da0d >[heketi] INFO 2018/06/08 08:43:04 Started async operation: Delete Volume >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 159.842µs >[negroni] Started DELETE /volumes/519804da0a04715f5fe88eef3554e109 >[negroni] Completed 202 Accepted in 11.791578ms >[asynchttp] INFO 2018/06/08 08:43:04 asynchttp.go:288: Started job 68f1753cbf11e3a7962681eb39bdfd38 >[heketi] INFO 2018/06/08 08:43:04 Started async operation: Delete Volume >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 102.088µs >[kubeexec] DEBUG 2018/06/08 08:43:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_34180b600b060c8400f7902d50d19fce --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[kubeexec] DEBUG 2018/06/08 08:43:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_519804da0a04715f5fe88eef3554e109 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 195.879µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 128.573µs >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] 
Completed 200 OK in 136.535µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 165.739µs >[kubeexec] DEBUG 2018/06/08 08:43:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_34180b600b060c8400f7902d50d19fce force >Result: volume stop: vol_34180b600b060c8400f7902d50d19fce: success >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 142.766µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 211.655µs >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 245.952µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 183.233µs >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 176.933µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 156.674µs >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 124.654µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 123.546µs >[kubeexec] DEBUG 2018/06/08 08:43:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_519804da0a04715f5fe88eef3554e109 force >Result: volume stop: vol_519804da0a04715f5fe88eef3554e109: success >[kubeexec] DEBUG 2018/06/08 08:43:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_34180b600b060c8400f7902d50d19fce >Result: volume delete: vol_34180b600b060c8400f7902d50d19fce: success >[heketi] INFO 2018/06/08 08:43:10 Deleting brick 
69c43b1a8d9e0514a7e21bde6a2b8a94 >[heketi] INFO 2018/06/08 08:43:10 Deleting brick 92deb78ca12e07cbb3500736c89f75fb >[heketi] INFO 2018/06/08 08:43:10 Deleting brick 9b2ad28a2bba6499ea37c90a18a2f09e >[kubeexec] DEBUG 2018/06/08 08:43:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94 | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_69c43b1a8d9e0514a7e21bde6a2b8a94 >[kubeexec] DEBUG 2018/06/08 08:43:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e | cut -d" " -f1 >Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_9b2ad28a2bba6499ea37c90a18a2f09e >[kubeexec] DEBUG 2018/06/08 08:43:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_519804da0a04715f5fe88eef3554e109 >Result: volume delete: vol_519804da0a04715f5fe88eef3554e109: success >[heketi] INFO 2018/06/08 08:43:11 Deleting brick a48060da658afcf0bd672a1d5e325a35 >[heketi] INFO 2018/06/08 08:43:11 Deleting brick eab93f37cd264f7b8d5185da3d5e1ff2 >[heketi] INFO 2018/06/08 08:43:11 Deleting brick 6041796df9e6e8dde7acbc28fb15eebd >[kubeexec] DEBUG 2018/06/08 08:43:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_69c43b1a8d9e0514a7e21bde6a2b8a94 > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_69c43b1a8d9e0514a7e21bde6a2b8a94 
>[kubeexec] DEBUG 2018/06/08 08:43:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_9b2ad28a2bba6499ea37c90a18a2f09e > >Result: vg_9394bc70699b006c5460c9f654cf345f/tp_9b2ad28a2bba6499ea37c90a18a2f09e >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 197.323µs >[kubeexec] DEBUG 2018/06/08 08:43:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_92deb78ca12e07cbb3500736c89f75fb >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 217.161µs >[kubeexec] DEBUG 2018/06/08 08:43:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2 | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_eab93f37cd264f7b8d5185da3d5e1ff2 >[kubeexec] DEBUG 2018/06/08 08:43:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35 | cut -d" " -f1 >Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_a48060da658afcf0bd672a1d5e325a35 >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 140.577µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 
>[negroni] Completed 200 OK in 188.552µs >[kubeexec] DEBUG 2018/06/08 08:43:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6041796df9e6e8dde7acbc28fb15eebd >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 147.798µs >[kubeexec] DEBUG 2018/06/08 08:43:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94 >Result: >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 110.875µs >[kubeexec] DEBUG 2018/06/08 08:43:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e >Result: >[kubeexec] DEBUG 2018/06/08 08:43:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_92deb78ca12e07cbb3500736c89f75fb > >Result: vg_3a4297677881963e3f80124971d50eea/tp_92deb78ca12e07cbb3500736c89f75fb >[heketi] INFO 2018/06/08 08:43:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:43:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 189.432µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 
>[negroni] Completed 200 OK in 119.201µs >[kubeexec] DEBUG 2018/06/08 08:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_eab93f37cd264f7b8d5185da3d5e1ff2 > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_eab93f37cd264f7b8d5185da3d5e1ff2 >[kubeexec] DEBUG 2018/06/08 08:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_a48060da658afcf0bd672a1d5e325a35 > >Result: vg_9394bc70699b006c5460c9f654cf345f/tp_a48060da658afcf0bd672a1d5e325a35 >[kubeexec] DEBUG 2018/06/08 08:43:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6041796df9e6e8dde7acbc28fb15eebd > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_6041796df9e6e8dde7acbc28fb15eebd >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 247.618µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 139.058µs >[kubeexec] DEBUG 2018/06/08 08:43:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_69c43b1a8d9e0514a7e21bde6a2b8a94/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:43:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_9b2ad28a2bba6499ea37c90a18a2f09e/d" /var/lib/heketi/fstab >Result: >[kubeexec] 
DEBUG 2018/06/08 08:43:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb >Result: >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 127.792µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 191.007µs >[kubeexec] DEBUG 2018/06/08 08:43:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 19min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l 
/var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─13429 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:43:16 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:43:16 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:43:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35 >Result: >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 120.942µs >[kubeexec] DEBUG 2018/06/08 08:43:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd >Result: >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 240.088µs >[kubeexec] DEBUG 2018/06/08 08:43:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2 >Result: >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 119.052µs >[kubeexec] DEBUG 2018/06/08 08:43:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_9b2ad28a2bba6499ea37c90a18a2f09e > >Result: Logical volume "brick_9b2ad28a2bba6499ea37c90a18a2f09e" successfully removed >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 124.418µs >[kubeexec] DEBUG 2018/06/08 08:43:18 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_92deb78ca12e07cbb3500736c89f75fb/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 129.859µs >[kubeexec] DEBUG 2018/06/08 08:43:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_69c43b1a8d9e0514a7e21bde6a2b8a94 > >Result: Logical volume "brick_69c43b1a8d9e0514a7e21bde6a2b8a94" successfully removed >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 120.116µs >[kubeexec] DEBUG 2018/06/08 08:43:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_a48060da658afcf0bd672a1d5e325a35/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:43:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 19min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level 
INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─13712 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:43:19 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:43:19 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 219.385µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 175.292µs >[kubeexec] DEBUG 2018/06/08 08:43:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_eab93f37cd264f7b8d5185da3d5e1ff2/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:43:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_9b2ad28a2bba6499ea37c90a18a2f09e > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:43:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_6041796df9e6e8dde7acbc28fb15eebd/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 152.352µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 127.716µs >[kubeexec] DEBUG 2018/06/08 08:43:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_69c43b1a8d9e0514a7e21bde6a2b8a94 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:43:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f 
/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_a48060da658afcf0bd672a1d5e325a35 > >Result: Logical volume "brick_a48060da658afcf0bd672a1d5e325a35" successfully removed >[kubeexec] DEBUG 2018/06/08 08:43:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_92deb78ca12e07cbb3500736c89f75fb > >Result: Logical volume "brick_92deb78ca12e07cbb3500736c89f75fb" successfully removed >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 103.685µs >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 118.673µs >[kubeexec] DEBUG 2018/06/08 08:43:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_eab93f37cd264f7b8d5185da3d5e1ff2 > >Result: Logical volume "brick_eab93f37cd264f7b8d5185da3d5e1ff2" successfully removed >[kubeexec] DEBUG 2018/06/08 08:43:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 18min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 
830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─11949 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:43:23 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:43:23 Cleaned 0 nodes from health cache >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 172.101µs >[kubeexec] DEBUG 2018/06/08 08:43:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_6041796df9e6e8dde7acbc28fb15eebd > >Result: Logical volume "brick_6041796df9e6e8dde7acbc28fb15eebd" successfully removed >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 106.554µs >[kubeexec] DEBUG 2018/06/08 08:43:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_69c43b1a8d9e0514a7e21bde6a2b8a94 > >Result: Logical volume "tp_69c43b1a8d9e0514a7e21bde6a2b8a94" successfully removed >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 121.657µs >[kubeexec] DEBUG 2018/06/08 08:43:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_9b2ad28a2bba6499ea37c90a18a2f09e > >Result: Logical volume "tp_9b2ad28a2bba6499ea37c90a18a2f09e" successfully removed >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 124.879µs >[kubeexec] DEBUG 2018/06/08 08:43:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count 
vg_3a4297677881963e3f80124971d50eea/tp_92deb78ca12e07cbb3500736c89f75fb > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:43:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_eab93f37cd264f7b8d5185da3d5e1ff2 > >Result: 0 >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 92.284µs >[kubeexec] DEBUG 2018/06/08 08:43:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_a48060da658afcf0bd672a1d5e325a35 > >Result: 0 >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 89.274µs >[kubeexec] DEBUG 2018/06/08 08:43:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_6041796df9e6e8dde7acbc28fb15eebd > >Result: 0 >[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d >[negroni] Completed 200 OK in 151.005µs >[kubeexec] DEBUG 2018/06/08 08:43:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_69c43b1a8d9e0514a7e21bde6a2b8a94 >Result: >[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38 >[negroni] Completed 200 OK in 96.476µs >[kubeexec] DEBUG 2018/06/08 08:43:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_9b2ad28a2bba6499ea37c90a18a2f09e 
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_92deb78ca12e07cbb3500736c89f75fb
>
>Result: Logical volume "tp_92deb78ca12e07cbb3500736c89f75fb" successfully removed
>[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d
>[negroni] Completed 200 OK in 191.883µs
>[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38
>[negroni] Completed 200 OK in 180.066µs
>[kubeexec] DEBUG 2018/06/08 08:43:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_eab93f37cd264f7b8d5185da3d5e1ff2
>
>Result: Logical volume "tp_eab93f37cd264f7b8d5185da3d5e1ff2" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:43:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_a48060da658afcf0bd672a1d5e325a35
>
>Result: Logical volume "tp_a48060da658afcf0bd672a1d5e325a35" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:43:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_6041796df9e6e8dde7acbc28fb15eebd
>
>Result: Logical volume "tp_6041796df9e6e8dde7acbc28fb15eebd" successfully removed
>[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d
>[negroni] Completed 200 OK in 123.198µs
>[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38
>[negroni] Completed 200 OK in 122.281µs
>[kubeexec] DEBUG 2018/06/08 08:43:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_eab93f37cd264f7b8d5185da3d5e1ff2
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_a48060da658afcf0bd672a1d5e325a35
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_92deb78ca12e07cbb3500736c89f75fb
>Result:
>[heketi] INFO 2018/06/08 08:43:29 Delete Volume succeeded
>[asynchttp] INFO 2018/06/08 08:43:29 asynchttp.go:292: Completed job 346c61e53782b5e0c258f3a62595da0d in 25.038088296s
>[negroni] Started GET /queue/346c61e53782b5e0c258f3a62595da0d
>[negroni] Completed 204 No Content in 142.118µs
>[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38
>[negroni] Completed 200 OK in 168.595µs
>[kubeexec] DEBUG 2018/06/08 08:43:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_6041796df9e6e8dde7acbc28fb15eebd
>Result:
>[heketi] INFO 2018/06/08 08:43:29 Delete Volume succeeded
>[asynchttp] INFO 2018/06/08 08:43:29 asynchttp.go:292: Completed job 68f1753cbf11e3a7962681eb39bdfd38 in 25.251210925s
>[negroni] Started GET /queue/68f1753cbf11e3a7962681eb39bdfd38
>[negroni] Completed 204 No Content in 132.801µs
>[negroni] Started DELETE /volumes/519804da0a04715f5fe88eef3554e109
>[negroni] Completed 404 Not Found in 4.07577ms
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 234.893µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 3.082636ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.193719ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 1.159411ms
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 200.224µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 651.133µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 570.331µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 529.642µs
>[negroni] Started POST /volumes
>[negroni] Started POST /volumes
>[heketi] INFO 2018/06/08 08:43:31 Allocating brick set #0
>[negroni] Completed 202 Accepted in 14.708355ms
>[asynchttp] INFO 2018/06/08 08:43:31 asynchttp.go:288: Started job 4b8e4d46de1924a143cd62855eaeadee
>[heketi] INFO 2018/06/08 08:43:31 Started async operation: Create Volume
>[negroni] Started GET /queue/4b8e4d46de1924a143cd62855eaeadee
>[negroni] Completed 200 OK in 92.876µs
>[heketi] INFO 2018/06/08 08:43:31 Allocating brick set #0
>[heketi] INFO 2018/06/08 08:43:31 Creating brick f05028077b974ebc1f6621aee2184169
>[heketi] INFO 2018/06/08 08:43:31 Creating brick 214eb9006f9103530a1d0310d5f5dcfc
>[heketi] INFO 2018/06/08 08:43:31 Creating brick 4f4d753d298c99eac492c32006c74484
>[negroni] Completed 202 Accepted in 30.237674ms
>[asynchttp] INFO 2018/06/08 08:43:31 asynchttp.go:288: Started job c2abd38f6c55bb468ebcfd89d453fdc9
>[heketi] INFO 2018/06/08 08:43:31 Started async operation: Create Volume
>[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9
>[negroni] Completed 200 OK in 82.798µs
>[heketi] INFO 2018/06/08 08:43:31 Creating brick b38349024b5350e969179b72b5c2af7c
>[heketi] INFO 2018/06/08 08:43:31 Creating brick 9775522b5bb908520213f17390c26d53
>[heketi] INFO 2018/06/08 08:43:31 Creating brick bf9bfa3d8464d1e5476516d891583f10
>[negroni] Started POST /volumes
>[heketi] INFO 2018/06/08 08:43:31 Allocating brick set #0
>[negroni] Completed 202 Accepted in 18.878321ms
>[asynchttp] INFO 2018/06/08 08:43:32 asynchttp.go:288: Started job 8b3ba83547bd02c5fa03488e1de595ea
>[heketi] INFO 2018/06/08 08:43:32 Started async operation: Create Volume
>[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea
>[negroni] Completed 200 OK in 105.077µs
>[heketi] INFO 2018/06/08 08:43:32 Creating brick 00c2d7c5fda2de77f930386434235209
>[heketi] INFO 2018/06/08 08:43:32 Creating brick 0fb75c3a027f88c53839c0f9578a1801
>[heketi] INFO 2018/06/08 08:43:32 Creating brick 3fce0ef044cb1f17096a5bf85437e0db
>[negroni] Started POST /volumes
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc
>Result:
>[heketi] INFO 2018/06/08 08:43:32 Allocating brick set #0
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169
>Result:
>[negroni] Completed 202 Accepted in 19.240323ms
>[asynchttp] INFO 2018/06/08 08:43:32 asynchttp.go:288: Started job 5aa78dda299ed08cbb39d86264fe4af8
>[heketi] INFO 2018/06/08 08:43:32 Started async operation: Create Volume
>[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8
>[negroni] Completed 200 OK in 99.175µs
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484
>Result:
>[heketi] INFO 2018/06/08 08:43:32 Creating brick a40c93ecae170d4b0e28378a7bf7aaee
>[heketi] INFO 2018/06/08 08:43:32 Creating brick 7be0b70e69a87b262dc5f895d408cc1c
>[heketi] INFO 2018/06/08 08:43:32 Creating brick 93847c4b6b28d7ed761f42d934f90771
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_f05028077b974ebc1f6621aee2184169 --virtualsize 2097152K --name brick_f05028077b974ebc1f6621aee2184169
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_f05028077b974ebc1f6621aee2184169" created.
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_214eb9006f9103530a1d0310d5f5dcfc --virtualsize 2097152K --name brick_214eb9006f9103530a1d0310d5f5dcfc
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_214eb9006f9103530a1d0310d5f5dcfc" created.
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_4f4d753d298c99eac492c32006c74484 --virtualsize 2097152K --name brick_4f4d753d298c99eac492c32006c74484
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_4f4d753d298c99eac492c32006c74484" created.
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_f05028077b974ebc1f6621aee2184169
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_f05028077b974ebc1f6621aee2184169 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4f4d753d298c99eac492c32006c74484
>Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4f4d753d298c99eac492c32006c74484 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_214eb9006f9103530a1d0310d5f5dcfc
>Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_214eb9006f9103530a1d0310d5f5dcfc isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_f05028077b974ebc1f6621aee2184169 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4f4d753d298c99eac492c32006c74484 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_214eb9006f9103530a1d0310d5f5dcfc /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_f05028077b974ebc1f6621aee2184169 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_214eb9006f9103530a1d0310d5f5dcfc /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4f4d753d298c99eac492c32006c74484 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484/brick
>Result:
>[negroni] Started GET /queue/4b8e4d46de1924a143cd62855eaeadee
>[negroni] Completed 200 OK in 135.103µs
>[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9
>[negroni] Completed 200 OK in 166.332µs
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2000 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2000 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2000 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484/brick
>Result:
>[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea
>[negroni] Completed 200 OK in 96.794µs
>[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8
>[negroni] Completed 200 OK in 116.049µs
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:43:33 Creating volume vol_5db55c483f6984d9917cfe4b3c8b3cbc replica 3
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_9775522b5bb908520213f17390c26d53 --virtualsize 2097152K --name brick_9775522b5bb908520213f17390c26d53
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_9775522b5bb908520213f17390c26d53" created.
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_bf9bfa3d8464d1e5476516d891583f10 --virtualsize 2097152K --name brick_bf9bfa3d8464d1e5476516d891583f10
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_bf9bfa3d8464d1e5476516d891583f10" created.
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_b38349024b5350e969179b72b5c2af7c --virtualsize 2097152K --name brick_b38349024b5350e969179b72b5c2af7c
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_b38349024b5350e969179b72b5c2af7c" created.
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9775522b5bb908520213f17390c26d53
>Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9775522b5bb908520213f17390c26d53 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b38349024b5350e969179b72b5c2af7c
>Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b38349024b5350e969179b72b5c2af7c isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9775522b5bb908520213f17390c26d53 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bf9bfa3d8464d1e5476516d891583f10
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bf9bfa3d8464d1e5476516d891583f10 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b38349024b5350e969179b72b5c2af7c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9775522b5bb908520213f17390c26d53 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bf9bfa3d8464d1e5476516d891583f10 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b38349024b5350e969179b72b5c2af7c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53/brick
>Result:
>[negroni] Started GET /queue/4b8e4d46de1924a143cd62855eaeadee
>[negroni] Completed 200 OK in 112.388µs
>[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9
>[negroni] Completed 200 OK in 171.584µs
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bf9bfa3d8464d1e5476516d891583f10 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53/brick
>Result:
>[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea
>[negroni] Completed 200 OK in 110.21µs
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10/brick
>Result:
>[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8
>[negroni] Completed 200 OK in 96.528µs
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:43:34 Creating volume vol_a4a6e4892da299f6c5634b8a2def697e replica 3
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209
>Result:
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_3fce0ef044cb1f17096a5bf85437e0db --virtualsize 2097152K --name brick_3fce0ef044cb1f17096a5bf85437e0db
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_3fce0ef044cb1f17096a5bf85437e0db" created.
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_0fb75c3a027f88c53839c0f9578a1801 --virtualsize 2097152K --name brick_0fb75c3a027f88c53839c0f9578a1801
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_0fb75c3a027f88c53839c0f9578a1801" created.
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_3fce0ef044cb1f17096a5bf85437e0db
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_3fce0ef044cb1f17096a5bf85437e0db isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_00c2d7c5fda2de77f930386434235209 --virtualsize 2097152K --name brick_00c2d7c5fda2de77f930386434235209
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_00c2d7c5fda2de77f930386434235209" created.
>[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_3fce0ef044cb1f17096a5bf85437e0db /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0fb75c3a027f88c53839c0f9578a1801 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0fb75c3a027f88c53839c0f9578a1801 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_00c2d7c5fda2de77f930386434235209 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_00c2d7c5fda2de77f930386434235209 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none 
extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0fb75c3a027f88c53839c0f9578a1801 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_00c2d7c5fda2de77f930386434235209 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_3fce0ef044cb1f17096a5bf85437e0db /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db >Result: >[negroni] Started GET /queue/4b8e4d46de1924a143cd62855eaeadee >[negroni] Completed 200 OK in 121.257µs >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 200 OK in 138.188µs >[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0fb75c3a027f88c53839c0f9578a1801 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801 >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_00c2d7c5fda2de77f930386434235209 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209 >Result: >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 144.592µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 110.753µs >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2002 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m 
Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2002 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2002 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a40c93ecae170d4b0e28378a7bf7aaee >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801/brick >Result: >[cmdexec] INFO 2018/06/08 08:43:35 Creating volume vol_966a92ed3e4374a5e634f7b133c49e52 replica 3 >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_93847c4b6b28d7ed761f42d934f90771 >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7be0b70e69a87b262dc5f895d408cc1c >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_a40c93ecae170d4b0e28378a7bf7aaee --virtualsize 2097152K --name brick_a40c93ecae170d4b0e28378a7bf7aaee >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_a40c93ecae170d4b0e28378a7bf7aaee" created. >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_a40c93ecae170d4b0e28378a7bf7aaee >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_a40c93ecae170d4b0e28378a7bf7aaee isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K 
--chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_93847c4b6b28d7ed761f42d934f90771 --virtualsize 2097152K --name brick_93847c4b6b28d7ed761f42d934f90771 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_93847c4b6b28d7ed761f42d934f90771" created. >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_7be0b70e69a87b262dc5f895d408cc1c --virtualsize 2097152K --name brick_7be0b70e69a87b262dc5f895d408cc1c >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_7be0b70e69a87b262dc5f895d408cc1c" created. >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_a40c93ecae170d4b0e28378a7bf7aaee /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a40c93ecae170d4b0e28378a7bf7aaee xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_93847c4b6b28d7ed761f42d934f90771 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_93847c4b6b28d7ed761f42d934f90771 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 
ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/4b8e4d46de1924a143cd62855eaeadee >[negroni] Completed 200 OK in 122.854µs >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_93847c4b6b28d7ed761f42d934f90771 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_93847c4b6b28d7ed761f42d934f90771 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 200 OK in 123.429µs >[kubeexec] DEBUG 2018/06/08 08:43:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_a40c93ecae170d4b0e28378a7bf7aaee /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a40c93ecae170d4b0e28378a7bf7aaee >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7be0b70e69a87b262dc5f895d408cc1c >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7be0b70e69a87b262dc5f895d408cc1c isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none 
extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 101.176µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 147.794µs >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a40c93ecae170d4b0e28378a7bf7aaee/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7be0b70e69a87b262dc5f895d408cc1c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7be0b70e69a87b262dc5f895d408cc1c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_93847c4b6b28d7ed761f42d934f90771 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_93847c4b6b28d7ed761f42d934f90771 >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2003 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a40c93ecae170d4b0e28378a7bf7aaee/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_93847c4b6b28d7ed761f42d934f90771/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7be0b70e69a87b262dc5f895d408cc1c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7be0b70e69a87b262dc5f895d408cc1c >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a40c93ecae170d4b0e28378a7bf7aaee/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2003 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_93847c4b6b28d7ed761f42d934f90771/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7be0b70e69a87b262dc5f895d408cc1c/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_93847c4b6b28d7ed761f42d934f90771/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2003 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7be0b70e69a87b262dc5f895d408cc1c/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7be0b70e69a87b262dc5f895d408cc1c/brick >Result: >[cmdexec] INFO 2018/06/08 08:43:36 Creating volume vol_9c85de66c12db0a72b4d16fe888ff74d replica 3 >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_5db55c483f6984d9917cfe4b3c8b3cbc replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484/brick >Result: volume create: vol_5db55c483f6984d9917cfe4b3c8b3cbc: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_966a92ed3e4374a5e634f7b133c49e52 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db/brick >Result: volume create: vol_966a92ed3e4374a5e634f7b133c49e52: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 
08:43:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_a4a6e4892da299f6c5634b8a2def697e replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10/brick >Result: volume create: vol_a4a6e4892da299f6c5634b8a2def697e: success: please start the volume to access data >[negroni] Started GET /queue/4b8e4d46de1924a143cd62855eaeadee >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 200 OK in 105.218µs >[negroni] Completed 200 OK in 72.027µs >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 164.954µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 113.203µs >[negroni] Started GET /queue/4b8e4d46de1924a143cd62855eaeadee >[negroni] Completed 200 OK in 190.172µs >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 200 OK in 358.473µs >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 103.274µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 109.645µs >[kubeexec] DEBUG 2018/06/08 08:43:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_5db55c483f6984d9917cfe4b3c8b3cbc >Result: volume start: vol_5db55c483f6984d9917cfe4b3c8b3cbc: success >[heketi] INFO 2018/06/08 08:43:38 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:43:38 asynchttp.go:292: Completed job 
4b8e4d46de1924a143cd62855eaeadee in 6.974262608s >[negroni] Started GET /queue/4b8e4d46de1924a143cd62855eaeadee >[negroni] Completed 303 See Other in 117.468µs >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 200 OK in 54.349µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 4.956864ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 1.706283ms >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 2.706513ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.846939ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 926.005µs >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 155.084µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 169.555µs >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 200 OK in 105.106µs >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 126.275µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 124.025µs >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 200 OK in 107.003µs >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 106.63µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 109.519µs >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 200 OK in 119.192µs >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 120.306µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 81.991µs >[negroni] Started GET /queue/c2abd38f6c55bb468ebcfd89d453fdc9 
>[negroni] Completed 200 OK in 145.414µs >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 200 OK in 135.486µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 124.179µs >[kubeexec] DEBUG 2018/06/08 08:43:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_a4a6e4892da299f6c5634b8a2def697e >Result: volume start: vol_a4a6e4892da299f6c5634b8a2def697e: success >[kubeexec] DEBUG 2018/06/08 08:43:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_9c85de66c12db0a72b4d16fe888ff74d replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a40c93ecae170d4b0e28378a7bf7aaee/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_93847c4b6b28d7ed761f42d934f90771/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7be0b70e69a87b262dc5f895d408cc1c/brick >Result: volume create: vol_9c85de66c12db0a72b4d16fe888ff74d: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:43:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_966a92ed3e4374a5e634f7b133c49e52 >Result: volume start: vol_966a92ed3e4374a5e634f7b133c49e52: success >[heketi] INFO 2018/06/08 08:43:43 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:43:43 asynchttp.go:292: Completed job c2abd38f6c55bb468ebcfd89d453fdc9 in 12.043553614s >[heketi] INFO 2018/06/08 08:43:43 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:43:43 asynchttp.go:292: Completed job 8b3ba83547bd02c5fa03488e1de595ea in 11.970530637s >[negroni] Started GET 
/queue/c2abd38f6c55bb468ebcfd89d453fdc9 >[negroni] Completed 303 See Other in 175.424µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 3.966686ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 326.02µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 3.076164ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.189707ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 852.583µs >[negroni] Started GET /queue/8b3ba83547bd02c5fa03488e1de595ea >[negroni] Completed 303 See Other in 150.576µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 2.000867ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 651.682µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.215816ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.373329ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.043432ms >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 113.014µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 189.883µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 195.022µs >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 200 OK in 113.379µs >[kubeexec] DEBUG 2018/06/08 08:43:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_9c85de66c12db0a72b4d16fe888ff74d >Result: volume start: vol_9c85de66c12db0a72b4d16fe888ff74d: success 
>[heketi] INFO 2018/06/08 08:43:47 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:43:47 asynchttp.go:292: Completed job 5aa78dda299ed08cbb39d86264fe4af8 in 15.197281898s >[negroni] Started GET /queue/5aa78dda299ed08cbb39d86264fe4af8 >[negroni] Completed 303 See Other in 277.638µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 5.052531ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 330.844µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 3.654089ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.439958ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.535745ms >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 2.538844ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.062205ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.312156ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 2.361872ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 958.309µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 608.712µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 617.882µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 646.398µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 9.22408ms >[asynchttp] INFO 2018/06/08 08:43:49 asynchttp.go:288: Started job a9e6acc2efeee7febda192bc9c56b54e >[heketi] INFO 2018/06/08 08:43:49 Started async operation: Delete Volume >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e 
>[negroni] Completed 200 OK in 127.611µs >[kubeexec] DEBUG 2018/06/08 08:43:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started DELETE /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 202 Accepted in 12.841174ms >[asynchttp] INFO 2018/06/08 08:43:49 asynchttp.go:288: Started job f57802d82a15df4af9a77cc5ba0a7516 >[heketi] INFO 2018/06/08 08:43:49 Started async operation: Delete Volume >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 124.402µs >[kubeexec] DEBUG 2018/06/08 08:43:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_966a92ed3e4374a5e634f7b133c49e52 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started DELETE /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 202 Accepted in 14.556322ms >[asynchttp] INFO 2018/06/08 08:43:50 asynchttp.go:288: Started job 9e1718475edb952ba52dafee886587f8 >[heketi] INFO 2018/06/08 08:43:50 Started async operation: Delete Volume >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 98.953µs >[kubeexec] DEBUG 2018/06/08 08:43:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_a4a6e4892da299f6c5634b8a2def697e 
--xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 14.948979ms >[asynchttp] INFO 2018/06/08 08:43:50 asynchttp.go:288: Started job c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[heketi] INFO 2018/06/08 08:43:50 Started async operation: Delete Volume >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 124.223µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 207.412µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 179.674µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 112.11µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 229.202µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 102.411µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 133.541µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 177.485µs >[kubeexec] DEBUG 2018/06/08 08:43:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force >Result: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: success >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 110.846µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 226.88µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 237.808µs >[negroni] Started GET 
/queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 219.919µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 156.179µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 121.306µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 109.043µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 174.997µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 140.505µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 156.011µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 148.046µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 120.631µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 133.628µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 209.746µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 120.355µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 207.735µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 212.109µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 252.651µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 256.095µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 222.014µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 138.624µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 169.923µs >[negroni] Started GET 
/queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 146.494µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 161.973µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 200.904µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 189.675µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 153.76µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 300.024µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 320.017µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 328.938µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 160.973µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 215.252µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 167.169µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 241.928µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 165.559µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 171.522µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 232.389µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 166.223µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 241.272µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 151.22µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 169.052µs >[negroni] Started GET 
/queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 130.712µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 257.041µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 232.255µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 170.155µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 226.248µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 216.674µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 217.629µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 124.54µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 217.397µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 159.813µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 362.377µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 168.306µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 319.332µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 160.194µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 133.269µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 161.241µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 181.182µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 227.309µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 182.954µs >[negroni] Started GET 
/queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 341.967µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 150.522µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 133.413µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 191.721µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 174.652µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 251.544µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 135.366µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 151.117µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 316.698µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 169.152µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 260.728µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 189.76µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 184.871µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 290.991µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 126.134µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 224.298µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 215.463µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 156.787µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 266.745µs >[negroni] Started GET 
/queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 160.754µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 182.809µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 179.125µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 149.954µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 128.896µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 144.467µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 201.6µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 158.302µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 122.665µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 259.282µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 243.035µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 229.518µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 213.478µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 242.858µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 329.618µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 178.859µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 131.523µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 127.52µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 147.927µs >[negroni] Started GET 
/queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 141.333µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 137.095µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 232.381µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 228.415µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 161.679µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 210.048µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 123.715µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 135.003µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 210.141µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 175.529µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 200.957µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 222.491µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 184.896µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 159.19µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 271.415µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 305.374µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 193.436µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 297.629µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 181.338µs >[negroni] Started GET 
/queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 333.244µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 162.359µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 236.324µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 143.506µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 224.423µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 202.974µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 300.438µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 214.945µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 229.168µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 252.732µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 206.972µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 184.643µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 293.681µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 158.177µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 169.658µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 209.835µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 305.28µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 144.001µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 174.08µs >[negroni] Started GET 
/queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 121.724µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 279.108µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 452.384µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 205.316µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 182.169µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 278.914µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 230.077µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 217.995µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 182.534µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 340.678µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 218.938µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 154.049µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 159.909µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 261.655µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 189.189µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 168.218µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 196.873µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 181.839µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 253.325µs >[negroni] Started GET 
/queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 126.594µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 192.685µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 147.446µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 152.24µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 131.437µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 250.551µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 143.795µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 207.975µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 238.858µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 385.164µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 193.76µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 204.528µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 177.637µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 196.732µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 204.713µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 167.677µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 201.623µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 206.295µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 243.645µs >[negroni] Started GET 
/queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 269.058µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 190.169µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 174.773µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 312.301µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 192.505µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 214.602µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 138.957µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 239.626µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 143.753µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 210.859µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 155.345µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 146.727µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 132.845µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 269.045µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 148.1µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 219.148µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 189.071µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 118.583µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 224.935µs >[negroni] Started GET 
/queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 288.851µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 288.073µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 266.641µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 260.13µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 138.836µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 168.518µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 191.066µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 244.764µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 141.441µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 127.683µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 223.742µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 171.606µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 186.196µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 132.045µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 117.983µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 171.095µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 125.475µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 250.191µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 117.397µs >[negroni] Started GET 
/queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 270.057µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 169.759µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 235.021µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 334.088µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 236.938µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 125.338µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 157.496µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 210.669µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 262.948µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 340.429µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 234.945µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 294.998µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 197.205µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 167.126µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 175.974µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 200.472µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 159.507µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 244.934µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 183.159µs >[negroni] Started GET 
/queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 155.379µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 197.546µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 185.506µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 175.429µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 205.362µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 277.448µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 182.524µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 143.032µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 259.746µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 220.042µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 230.652µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 130.256µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 156.32µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 169.678µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 279.391µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 209.581µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 163.748µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 160.806µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 316.521µs >[negroni] Started GET 
/queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 225.071µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 174.254µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 181.541µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 208.052µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 176.414µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 203.325µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 189.769µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 195.869µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 210.024µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 236.465µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 199.702µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 246.605µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 224.245µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 163.929µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 131.199µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 121.321µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 205.462µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 176.089µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 199.786µs >[negroni] Started GET 
/queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 186.739µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 241.805µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 224.875µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 186.049µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 217.345µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 195.425µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 196.676µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 184.162µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 155.239µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 135.218µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 195.277µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 269.497µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 179.608µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 195.632µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 197.485µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 182.045µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 168.938µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 160.899µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 204.996µs >[negroni] Started GET 
/queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 172.337µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 177.422µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 164.898µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 182.324µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 130.044µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 164.738µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 272.134µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 198.461µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 169.24µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 300.231µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 126.726µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 331.957µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 156.487µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 143.978µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 173.842µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 288.765µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 187.974µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 211.11µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 171.129µs >[negroni] Started GET 
/queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 193.861µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 184.399µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 204.611µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 138.189µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 170.665µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 213.938µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 136.423µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 195.62µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 187.969µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 230.758µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 125.096µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 137.442µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 184.252µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 152.421µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 139.175µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 154.385µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 192.699µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 260.685µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 135.279µs >[negroni] Started GET 
/queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 134.615µs
>[heketi] INFO 2018/06/08 08:45:14 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 08:45:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 200 OK in 218.605µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 178.972µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 257.328µs
>[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 201.894µs
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 200 OK in 189.896µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 196.585µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 166.128µs
>[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 186.989µs
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 200 OK in 151.383µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 202.835µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 206.771µs
>[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 154.105µs
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 200 OK in 144.823µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 152.133µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 192.308µs
>[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 140.64µs
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni]
Completed 200 OK in 180.522µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 181.926µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 138.539µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 168.981µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 223.706µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 138.241µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 189.362µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 178.769µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 136.853µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 229.081µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 143.317µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 132.588µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 180.339µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 127.46µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 195.921µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 128.754µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 142.627µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 178.137µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 140.397µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 
204.489µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 198.329µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 198.333µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 187.765µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 195.249µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 130.032µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 156.025µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 254.158µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 133.087µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 144.057µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 193.479µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 186.359µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 138.279µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 214.829µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 251.429µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 208.315µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 198.468µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 171.336µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 228.726µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 223.328µs >[negroni] 
Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 179.538µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 128.972µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 134.495µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 118.465µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 178.701µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 202.722µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 195.119µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 256.945µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 191.476µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 188.015µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 236.265µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 236.444µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 160.603µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 149.356µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 191.085µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 165.162µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 144.014µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 333.133µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 200.586µs >[negroni] Started GET 
/queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 188.358µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 199.798µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 137.359µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 187.492µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 199.585µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 133.154µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 162.62µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 189.394µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 161.227µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 206.792µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 193.912µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 212.03µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 207.092µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 210.206µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 143.664µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 182.162µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 206.222µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 176.659µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 174.972µs >[negroni] Started GET 
/queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 180.171µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 264.185µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 187.402µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 177.153µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 203.531µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 138.63µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 116.726µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 145.239µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 127.347µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 173.372µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 190.532µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 166.086µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 179.068µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 127.077µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 183.163µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 128.977µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 186.929µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 123.28µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 277.78µs >[negroni] Started GET 
/queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 185.222µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 183.236µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 214.844µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 195.955µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 207.345µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 197.841µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 190.552µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 221.899µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 153.377µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 136.719µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 134.813µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 284.555µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 231.605µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 221.096µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 178.222µs >[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516 >[negroni] Completed 200 OK in 183.983µs >[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8 >[negroni] Completed 200 OK in 128.069µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 169.501µs >[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e >[negroni] Completed 200 OK in 199.142µs >[negroni] Started GET 
/queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 178.525µs
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 200 OK in 149.048µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 197.282µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 177.162µs
>[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 207.103µs
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 200 OK in 171.659µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 268.171µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 200.662µs
>[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 168.539µs
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 200 OK in 162.731µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 177.207µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 160.013µs
>[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 200 OK in 194.973µs
>[kubeexec] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_966a92ed3e4374a5e634f7b133c49e52 force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout [Error : Request timed out
>]: Stderr []
>[cmdexec] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc:
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 200 OK in 262.561µs
>[kubeexec] DEBUG 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_a4a6e4892da299f6c5634b8a2def697e force] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout [Error : Request timed out
>]: Stderr []
>[cmdexec] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m:
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 325.131µs
>[kubeexec] DEBUG 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 22min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─14272 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034
>
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
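The `--xml` flag used in the snapshot-list command above makes the gluster CLI emit a machine-readable `cliOutput` document, which is how heketi inspects volume state before deleting. A minimal sketch of parsing one such document (the helper name `parse_snap_count` is illustrative, not heketi's actual code):

```python
import xml.etree.ElementTree as ET

# Sample taken verbatim from the `snapshot list ... --xml` result in this log.
CLI_OUTPUT = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
 <opRet>0</opRet>
 <opErrno>0</opErrno>
 <opErrstr/>
 <snapList>
  <count>0</count>
 </snapList>
</cliOutput>"""

def parse_snap_count(xml_text: str) -> int:
    """Return the snapshot count from gluster's --xml cliOutput, or raise on a CLI error."""
    root = ET.fromstring(xml_text)
    if int(root.findtext("opRet", default="-1")) != 0:
        raise RuntimeError(root.findtext("opErrstr") or "gluster CLI error")
    return int(root.findtext("snapList/count", default="0"))

print(parse_snap_count(CLI_OUTPUT))  # 0
```

A zero count, as here, means the volume can be stopped and deleted without first cleaning up snapshots.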
>[heketi] INFO 2018/06/08 08:45:50 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true
>[cmdexec] INFO 2018/06/08 08:45:50 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 4.141309ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.259317ms
>[kubeexec] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_966a92ed3e4374a5e634f7b133c49e52] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.67263ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[asynchttp] INFO 2018/06/08 08:45:50 asynchttp.go:292: Completed job f57802d82a15df4af9a77cc5ba0a7516 in 2m0.912541295s
>[heketi] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
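The `/queue/<job-id>` requests that dominate this log are heketi's async-operation pattern: a POST such as the volume create/delete returns `202 Accepted` with a job id, and the client polls the queue endpoint until the job leaves it (the `500 Internal Server Error` responses seen later in the log correspond to jobs that completed with errors). A rough sketch of such a poller; this is illustrative only, not the heketi client's actual code, and the status-code semantics (200 = pending, 303 = done, anything else = failed) are an assumption inferred from this log:

```python
import time

def poll_async_job(get_status, interval=0.01, timeout=5.0):
    """Poll a heketi-style /queue/{id} endpoint until the job leaves the queue.

    `get_status` is a callable returning an HTTP-like status code:
    200 while the job is still running, 303 on success, 500 on failure.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == 200:      # still pending; keep polling
            time.sleep(interval)
        elif status == 303:    # finished successfully, result available
            return True
        else:                  # e.g. 500: job completed with an error
            return False
    raise TimeoutError("async job did not finish in time")

# Simulated job: pending twice, then failed (mirroring the delete jobs above).
responses = iter([200, 200, 500])
print(poll_async_job(lambda: next(responses)))  # False
```

The two-minute `Completed job ... in 2m0.9s` lines above are this loop's server side: the job ran until the underlying gluster command timed out.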
>[negroni] Completed 200 OK in 5.005378ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 2.61157ms
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 985.012µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 1.843781ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 1.16153ms
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 176.224µs
>[kubeexec] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_a4a6e4892da299f6c5634b8a2def697e] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:45:50 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[asynchttp] INFO 2018/06/08 08:45:50 asynchttp.go:292: Completed job 9e1718475edb952ba52dafee886587f8 in 2m0.812423196s
>[negroni] Started GET /queue/f57802d82a15df4af9a77cc5ba0a7516
>[negroni] Completed 500 Internal Server Error in 269.925µs
>[negroni] Started GET /queue/9e1718475edb952ba52dafee886587f8
>[negroni] Completed 500 Internal Server Error in 240.441µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 168.267µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 200 OK in 254.601µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 4.647934ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 3.224815ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 900.977µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.561068ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.16295ms
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.051042ms
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 650.239µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 829.271µs
>[kubeexec] ERROR 2018/06/08 08:45:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout [Error : Request timed out
>]: Stderr []
>[cmdexec] ERROR 2018/06/08 08:45:52 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c:
>[heketi] ERROR 2018/06/08 08:45:52 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c:
>[heketi] ERROR 2018/06/08 08:45:52 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c:
>[heketi] ERROR 2018/06/08 08:45:52 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c:
>[asynchttp] INFO 2018/06/08 08:45:52 asynchttp.go:292: Completed job a9e6acc2efeee7febda192bc9c56b54e in 2m2.802710603s
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 152.63µs
>[negroni] Started GET /queue/a9e6acc2efeee7febda192bc9c56b54e
>[negroni] Completed 500 Internal Server Error in 202.237µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 5.031912ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200
OK in 3.232661ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.065085ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.854986ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.402887ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 632.554µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 588.298µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 612.95µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 204.085µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 563.032µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.145084ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 649.686µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.014036ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.080402ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.029899ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 611.997µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 619.701µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 170.884µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 391.651µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 682.261µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.074255ms >[negroni] Started GET 
/volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.016093ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 651.301µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 975.498µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 646.049µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 716.153µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 232.702µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 719.881µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 979.321µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 678.986µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 637.027µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.067369ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 680.466µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 607.672µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 867.089µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 146.905µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 244.924µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 565.429µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.084175ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 733.554µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] 
Completed 200 OK in 662.225µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.075006ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 613.926µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 609.461µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.019185ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 183.301µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 464.719µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 757.623µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 662.498µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 714.106µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 949.721µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 589.82µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 597.661µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 565.27µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 210.275µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 584.379µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 650.908µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 607.767µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.232287ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 739.407µs >[negroni] Started GET 
/volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.084132ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.102279ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.018209ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 231.208µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 540.379µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.083562ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.451779ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 657.63µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 615.866µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 566.449µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 578.236µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.101095ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 127.067µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 376.235µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.027918ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 631.478µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 616.632µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 554.297µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 519.156µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e 
>[negroni] Completed 200 OK in 1.057909ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 902.002µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 222.302µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 362.74µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 883.981µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.072649ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 607.004µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 553.284µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 602.305µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 736.442µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 975.219µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 139.034µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 183.305µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 485.59µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.088546ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 986.039µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 604.102µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 596.673µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 635.7µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 910.684µs >[negroni] 
Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 631.467µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 236.552µs
>[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Started DELETE /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Started DELETE /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 202 Accepted in 13.088735ms
>[asynchttp] INFO 2018/06/08 08:46:05 asynchttp.go:288: Started job b7642ac5823fb25c80d83504800b9c18
>[heketi] INFO 2018/06/08 08:46:05 Started async operation: Delete Volume
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 120.479µs
>[negroni] Completed 202 Accepted in 19.196566ms
>[asynchttp] INFO 2018/06/08 08:46:05 asynchttp.go:288: Started job 9abffd27d4b48c8f2f0e699632a69408
>[heketi] INFO 2018/06/08 08:46:05 Started async operation: Delete Volume
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 168.816µs
>[negroni] Completed 202 Accepted in 34.74259ms
>[asynchttp] INFO 2018/06/08 08:46:05 asynchttp.go:288: Started job c2b9e81466aab79d146584d4fe207629
>[heketi] INFO 2018/06/08 08:46:05 Started async operation: Delete Volume
>[negroni] Started GET /volumes
>[negroni] Started GET /queue/c2b9e81466aab79d146584d4fe207629
>[negroni] Completed 200 OK in 150.257µs
>[negroni] Completed 200 OK in 5.229066ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 2.30868ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 578.366µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.223012ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 766.437µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.00878ms
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 1.009005ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 938.057µs
>[kubeexec] DEBUG 2018/06/08 08:46:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_a4a6e4892da299f6c5634b8a2def697e --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 08:46:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_a4a6e4892da299f6c5634b8a2def697e force] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:46:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume stop: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[kubeexec] ERROR 2018/06/08 08:46:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_a4a6e4892da299f6c5634b8a2def697e] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:46:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:46:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:46:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:46:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[asynchttp] INFO 2018/06/08 08:46:06 asynchttp.go:292: Completed job c2b9e81466aab79d146584d4fe207629 in 734.184107ms
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 261.631µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 208.383µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 236.031µs
>[negroni] Started GET /queue/c2b9e81466aab79d146584d4fe207629
>[negroni] Completed 500 Internal Server Error in 164.424µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 4.683236ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 2.317349ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 617.533µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.443724ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.162033ms
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 638.136µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 616.874µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 704.529µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 213.59µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 186.308µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 106.156µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 839.256µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.195678ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 574.715µs
>[negroni] Started GET
/volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 591.762µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 564.713µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.022235ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 625.741µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.050923ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 190.028µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 252.135µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 179.351µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 831.31µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 703.385µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.017908ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.002099ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 566.486µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.101944ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 939.489µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 575.798µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 253.502µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 144.905µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 152.279µs >[negroni] Started GET /volumes >[negroni] 
Completed 200 OK in 571.554µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.035099ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.020013ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 560.316µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 952.716µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 946.496µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 624.052µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 965.609µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 152.505µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 132.285µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 146.692µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 669.83µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 778.46µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 567.097µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 633.68µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 766.004µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 552.794µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 516.722µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 544.041µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 
119.93µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 129.471µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 152.885µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 160.691µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 569.031µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 639.385µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 641.486µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 703.061µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 574.185µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 567.411µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 550.058µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 610.105µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 127.713µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 131.18µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 105.289µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 479.773µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 95.276µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 641.926µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 628.738µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 555.063µs >[negroni] Started GET 
/volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 538.253µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 84.583µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 509.925µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 553.636µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 494.643µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 183.271µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 159.222µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 131.675µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 794.461µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.090269ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.018193ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 568.112µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 628.784µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 603.919µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 548.853µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 509.522µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 144.205µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 163.576µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 93.335µs >[negroni] Started GET /volumes >[negroni] Completed 
200 OK in 565.676µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 639.807µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 680.529µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 566.983µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 525.289µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 621.788µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 969.106µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.067702ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 247.362µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 150.585µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 129.397µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 575.053µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 670.526µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 641.303µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 594.072µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 961.946µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 961.273µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 961.86µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 937.087µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 191.879µs 
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 191.414µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 171.102µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 911.279µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.140928ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 554.184µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 574.557µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 613.75µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 519.888µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 637.361µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 984.999µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 236.689µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 180.336µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 210.915µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 819.378µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 686.43µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 652.871µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 741.787µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 583.776µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 559.237µs >[negroni] Started GET 
/volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 1.602027ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 602.945µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 194.425µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 139.839µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 188.791µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 193.535µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 844.984µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 985.653µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.031582ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 951.803µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 998.125µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 957.119µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 949.097µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 941.996µs
>[negroni] Started DELETE /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 202 Accepted in 12.029817ms
>[asynchttp] INFO 2018/06/08 08:46:20 asynchttp.go:288: Started job 6c21d7f239114dec28a40bfe0657a19c
>[heketi] INFO 2018/06/08 08:46:20 Started async operation: Delete Volume
>[negroni] Started GET /queue/6c21d7f239114dec28a40bfe0657a19c
>[negroni] Completed 200 OK in 145.409µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 153.185µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni]
Completed 200 OK in 131.789µs
>[kubeexec] DEBUG 2018/06/08 08:46:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_a4a6e4892da299f6c5634b8a2def697e --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 08:46:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_a4a6e4892da299f6c5634b8a2def697e force] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:46:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume stop: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[kubeexec] ERROR 2018/06/08 08:46:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_a4a6e4892da299f6c5634b8a2def697e] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:46:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:46:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:46:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:46:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[asynchttp] INFO 2018/06/08 08:46:21 asynchttp.go:292: Completed job 6c21d7f239114dec28a40bfe0657a19c in 765.629949ms
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 185.697µs
>[negroni] Started GET /queue/6c21d7f239114dec28a40bfe0657a19c
>[negroni] Completed 500 Internal Server Error in 191.963µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 187.633µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 137.899µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 4.393522ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 2.420794ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.039195ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.213887ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.594482ms
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 587.059µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 555.121µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 686.892µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 221.374µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 225.385µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 118.653µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 508.506µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 674.882µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 578.674µs
>[negroni] Started GET
/volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 613.37µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 529.405µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 554.05µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 503.13µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 618.491µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 229.668µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 253.145µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 197.751µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 563.918µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 620.833µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 593.499µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 739.114µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 532.603µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 613.599µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 620.727µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 530.132µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 292.614µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 238.508µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 144.052µs >[negroni] Started GET /volumes >[negroni] Completed 200 
OK in 591.761µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 626.99µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 497.951µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 533.577µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 531.088µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 529.725µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 548.326µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 526.452µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 110.815µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 122.874µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 120.007µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 778.854µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.126481ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.122432ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.002803ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.022679ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 953.003µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 950.043µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 956.382µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 139.058µs 
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 199.025µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 189.104µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 787.87µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 918.316µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 677.649µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 657.037µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 811.872µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.222931ms
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 989.828µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 548.559µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 140.615µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 130.509µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 91.266µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 127.641µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 850.557µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.076632ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.080738ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 935.134µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 676.564µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 760.419µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 592.732µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 535.488µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 214.651µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 116.652µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 142.83µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 242.805µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 150.166µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 772.214µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 715.584µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.035412ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 605.39µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 953.412µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 948.4µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 605.528µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 592.063µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 231.658µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 175.761µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 139.323µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 590.305µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 794.747µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.105515ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.037376ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 960.283µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 591.601µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 953.029µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 599.758µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 122.716µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 116.703µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 121.385µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 521.11µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 650.913µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 662.305µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 549.688µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 585.502µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.036012ms
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 522.165µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 942.88µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 196.881µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 212.604µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 140.126µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 800.444µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.036529ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 768.725µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 734.946µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 964.186µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 634.307µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 618.777µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 559.525µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 212.058µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 242.422µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 182.461µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 816.337µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.056432ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 661.439µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 869.24µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 659.745µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 598.067µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 609.744µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 903.917µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 196.301µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 236.199µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 140.206µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 521.13µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 707.604µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 626.534µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 531.411µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 678.454µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 806.589µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 547.144µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 985.356µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 78.633µs
>[negroni] Started DELETE /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 202 Accepted in 14.365264ms
>[asynchttp] INFO 2018/06/08 08:46:35 asynchttp.go:288: Started job ad59b27cce2d3a9ca1ce2455f01a9e56
>[heketi] INFO 2018/06/08 08:46:35 Started async operation: Delete Volume
>[negroni] Started GET /queue/ad59b27cce2d3a9ca1ce2455f01a9e56
>[negroni] Completed 200 OK in 113.935µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 205.086µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 135.553µs
>[kubeexec] DEBUG 2018/06/08 08:46:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_a4a6e4892da299f6c5634b8a2def697e --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 08:46:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_a4a6e4892da299f6c5634b8a2def697e force] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:46:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume stop: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[kubeexec] ERROR 2018/06/08 08:46:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_a4a6e4892da299f6c5634b8a2def697e] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:46:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:46:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:46:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[asynchttp] INFO 2018/06/08 08:46:36 asynchttp.go:292: Completed job ad59b27cce2d3a9ca1ce2455f01a9e56 in 716.649398ms
>[heketi] ERROR 2018/06/08 08:46:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime.
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 201.839µs
>[negroni] Started GET /queue/ad59b27cce2d3a9ca1ce2455f01a9e56
>[negroni] Completed 500 Internal Server Error in 238.028µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 4.459436ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 2.778976ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 712.247µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.907219ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.229608ms
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 975.772µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 579.053µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 595.698µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 104.455µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 153.865µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 204.045µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 137.959µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 140.31µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 520.563µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 645.413µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 715.066µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 561.328µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 625.166µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 560.46µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 536.027µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 494.003µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 245.775µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 192.198µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 134.332µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 1.054482ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 752.544µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.016739ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 621.62µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 706.491µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 631.691µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 622.07µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 537.1µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 195.045µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 221.352µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 130.272µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 545.102µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 2.597689ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.558791ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.018081ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 985.408µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 612.995µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 579.937µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 642.539µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 196.009µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 115.736µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 89.622µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 783.967µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.100204ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.035712ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 979.463µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.016688ms
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.711904ms
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 1.004307ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 692.348µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 204.136µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 168.757µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 135.622µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 828.197µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 668.475µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 696.948µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.040629ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 592.382µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.00385ms
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 1.000446ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 966.282µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 208.848µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 441.09µs
>[negroni] Completed 200 OK in 153.759µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 177.056µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 592.912µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 684.186µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 630.346µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 593.171µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 620.421µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 579.422µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 1.073398ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 593.293µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 204.201µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 179.435µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 164.678µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 688.452µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 133.787µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 99.129µs
>[negroni] Completed 200 OK in 1.118348ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 628.109µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 973.446µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 578.484µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 953.863µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 560.39µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 1.032939ms
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 190.501µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 206.663µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 122.283µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 494.264µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 944.252µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.027796ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.012799ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 613.106µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 544.636µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 605.065µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 1.008816ms
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 169.563µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 177.981µs
>[negroni] Completed 200 OK in 98.851µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 806.861µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.033725ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.061169ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.038749ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 579.513µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.014536ms
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 542.434µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 1.690267ms
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 207.435µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 212.891µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 130.459µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 826.024µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.022265ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 993.483µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 675.877µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 597.785µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 613.416µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.043653ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 858.8µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 176.066µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 127.034µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 444.92µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 495.313µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.058315ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 661.788µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 587.891µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.035465ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 560.356µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.018355ms >[negroni] Started GET 
/volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 574.644µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 113.253µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 156.013µs
>[negroni] Completed 200 OK in 82.259µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 631.316µs
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 785.149µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.034606ms
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 998.099µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 988.449µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.135268ms
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 1.125204ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 996.376µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 135.389µs
>[negroni] Started DELETE /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 202 Accepted in 14.017764ms
>[asynchttp] INFO 2018/06/08 08:46:50 asynchttp.go:288: Started job 548c6fbe203bb93f9a4ecb56f976926d
>[heketi] INFO 2018/06/08 08:46:50 Started async operation: Delete Volume
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 123.606µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 145.387µs
>[negroni] Completed 200 OK in 243.122µs
>[kubeexec] DEBUG 2018/06/08 08:46:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_a4a6e4892da299f6c5634b8a2def697e --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
>  <opRet>0</opRet>
>  <opErrno>0</opErrno>
>  <opErrstr/>
>  <snapList>
>    <count>0</count>
>  </snapList>
></cliOutput>
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 157.171µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 4.074252ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 2.334264ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 590.107µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.045169ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 560.226µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 566.836µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 611.696µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 584.324µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 136.809µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 172.255µs
>[negroni] Completed 200 OK in 197.878µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 200 OK in 183.049µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 114.82µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 165.376µs
>[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 135.815µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 649.55µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.129071ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 589.095µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 607.804µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 505.81µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 511.676µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.008542ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 993.062µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 124.091µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 196.935µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 287.464µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 124.552µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 940.132µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.029012ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 595.439µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.046699ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 707.687µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 609.315µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] 
Completed 200 OK in 582.529µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 515.405µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 207.764µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 155.378µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 209.335µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 130.996µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 970µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 671.03µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 596.38µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 985.176µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.002518ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 954.732µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.054805ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.025096ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 195.848µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 121.995µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 116.328µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 55.359µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 932.087µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.575072ms >[negroni] Started GET 
/volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.131005ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.196014ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.061629ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.038972ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 686.215µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.017275ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 182.286µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 204.726µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 191.665µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 161.276µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 891.027µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.091652ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 987.96µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.079111ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.611038ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.012445ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 980.035µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.125475ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 188.959µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 132.409µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 134.167µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 153.955µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 575.637µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 635.953µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 955.728µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.063432ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 566.674µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 995.443µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 988.623µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.023989ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 165.759µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 187.588µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 148.757µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 150.096µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 177.758µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 219.395µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.022355ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.690588ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 
200 OK in 1.119769ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.083805ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.079512ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 577.963µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.058226ms >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 188.476µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 105.012µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.071898ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 195.469µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 199.445µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 246.548µs >[negroni] Completed 200 OK in 247.528µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 605.927µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 707.347µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 823.272µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.013833ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 911.46µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.085694ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.00098ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 548.553µs 
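The kubeexec entry above shows the delete path shelling out to `gluster --mode=script snapshot list vol_a4a6e4892da299f6c5634b8a2def697e --xml` and getting back a `<snapList><count>0</count>` reply, presumably so heketi can confirm no snapshots exist before tearing the volume down. A minimal sketch of parsing that CLI reply with Python's standard library (the helper name is ours, not heketi's):

```python
import xml.etree.ElementTree as ET

# XML reply captured in this log for
# "gluster --mode=script snapshot list vol_a4a6e4892da299f6c5634b8a2def697e --xml"
CLI_OUTPUT = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapList>
    <count>0</count>
  </snapList>
</cliOutput>"""

def snapshot_count(xml_text: str) -> int:
    """Return the snapshot count from a gluster CLI --xml reply.

    Raises RuntimeError when the CLI reported a failure (opRet != 0).
    """
    root = ET.fromstring(xml_text)
    if int(root.findtext("opRet", default="-1")) != 0:
        raise RuntimeError(root.findtext("opErrstr") or "gluster CLI error")
    return int(root.findtext("snapList/count", default="0"))

print(snapshot_count(CLI_OUTPUT))  # 0 snapshots, so the delete can proceed
```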
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 142.299µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 122.103µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 109.32µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 188.409µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 986.823µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 614.042µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.023169ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 600.265µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 522.514µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 582.002µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.031765ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 614.907µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 200.152µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 178.166µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 173.513µs >[negroni] Completed 200 OK in 272.181µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 634.259µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.536631ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 657.378µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 
>[negroni] Completed 200 OK in 1.017186ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.030635ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 983.263µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 984.436µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 589.818µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 193.162µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 165.686µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 121.406µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 287.75µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 601.371µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 747.772µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 651.729µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 623.863µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.058888ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 573.392µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 624.583µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 618.316µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 133.171µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 168.152µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 289.295µs >[negroni] Completed 200 OK in 96.065µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 573.486µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.087331ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.120225ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.117959ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.023028ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 643.242µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 760.283µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 593.113µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 260.135µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 257.695µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 231.708µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 98.785µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 148.487µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 908.349µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 700.626µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 617.964µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 603.393µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.043849ms >[negroni] Started GET 
/volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 548.994µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 187.449µs >[negroni] Completed 200 OK in 1.532405ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.001869ms >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 188.275µs >[negroni] Completed 200 OK in 142.029µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 189.512µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 246.128µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 152.065µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 130.515µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 601.656µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 666.892µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.076178ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 541.041µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 586.499µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 575.669µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 555.516µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 570.495µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 206.245µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 267.678µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 210.529µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 232.925µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 960.917µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.018895ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 858.291µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.110798ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 934.722µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 919.39µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 583.319µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 608.084µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 181.933µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 229.155µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 181.906µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 146.363µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 951.149µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 585.166µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 625.603µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 591.757µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 
200 OK in 625.258µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 599.178µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 688.404µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 528.94µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 130.839µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 224.551µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 159.868µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 136.606µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.167088ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.341673ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 826.7µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 629.493µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 901.638µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 639.394µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 547.664µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 742.775µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 156.999µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 192.211µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 130.168µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 100.192µs >[negroni] 
Started GET /volumes >[negroni] Completed 200 OK in 583.25µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.113628ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 948.374µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 807.086µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 574.664µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 526.422µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 593.58µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 556.067µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 138.548µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 132.733µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 147.323µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 177.392µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 636.23µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 822.224µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 624.957µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 602.352µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 575.373µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.033009ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 573.229µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 
>[negroni] Completed 200 OK in 988.983µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 178.162µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 282.961µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 308.864µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 198.935µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 214.345µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.547762ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 674.327µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 621.436µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 614.415µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 689.122µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 599.95µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 936.493µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 866.446µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 124.599µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 167.595µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 167.016µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 153.423µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 186.232µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 286.638µs 
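The long runs of `GET /queue/<job-id>` entries here are clients polling heketi's async-operation endpoint after a request such as the earlier `DELETE /volumes/a4a6e4892da299f6c5634b8a2def697e` returned `202 Accepted` with a job id. A hypothetical polling loop (the function is ours, and the 303-on-completion / 500-on-failure convention is an assumption based on heketi's documented async API, not something visible in this log excerpt):

```python
import time

def poll_async_job(fetch_status, interval=1.0, timeout=300.0):
    """Poll an async-job status endpoint (e.g. heketi's GET /queue/<job-id>)
    until it stops reporting "still running".

    fetch_status: callable returning an HTTP status code for one poll.
    Assumption: 200 means the job is still running, 303 redirects to the
    finished resource, and 500 means the job failed.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status != 200:      # job finished (303) or failed (500)
            return status
        time.sleep(interval)   # wait before the next poll
    raise TimeoutError("async job did not finish in time")

# Simulate a job that reports "running" twice, then completes:
responses = iter([200, 200, 303])
print(poll_async_job(lambda: next(responses), interval=0))  # -> 303
```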
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 122.819µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 539.57µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.152894ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 995.629µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 952.6µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.060329ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 909.673µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.002477ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 624.378µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 145.291µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 186.385µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 133.983µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 83.928µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 595.215µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 772.008µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 732.897µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 988.355µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 553.462µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 750.47µs >[negroni] Started GET 
/volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 975.295µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 723.548µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 191.518µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 160.55µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 292.325µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 293.752µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 918.31µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.111338ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 978.763µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 677.636µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.038005ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 998.216µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 650.317µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 910.833µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 128.056µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 227.408µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 187.668µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 111.823µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 594.356µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 
OK in 635.952µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 600.412µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 578.914µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 540.307µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 577.436µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 553.017µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 619.372µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 145.053µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 141.731µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 145.273µs >[negroni] Completed 200 OK in 212.383µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 930.855µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.193394ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 960.853µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.072796ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.03631ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 605.745µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 580.157µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 986.715µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 202.885µs >[negroni] 
Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 255.128µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 156.256µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 114.321µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 610.632µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 788.599µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 710.807µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 879.589µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 957.052µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 543.043µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 892.117µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 615.927µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 112.1µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 313.161µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 128.542µs >[negroni] Completed 200 OK in 198.501µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 149.833µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 128.73µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.225188ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 604.544µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] 
Completed 200 OK in 641.046µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 559.161µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 537.842µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 573.403µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 619.807µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 571.319µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 122.877µs >[negroni] Completed 200 OK in 176.504µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 232.551µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 152.781µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 123.417µs >[negroni] Completed 200 OK in 135.644µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 942.669µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.082846ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.048012ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 957.723µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 829.56µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 949.333µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 545.149µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 
1.052442ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 190.748µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 184.488µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 194.849µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 377.146µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 959.755µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.142902ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 648.118µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 692.59µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 555.376µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.006539ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.027479ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.038825ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 197.265µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 195.525µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 108.535µs >[negroni] Completed 200 OK in 178.496µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 820.154µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.340093ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 674.656µs >[negroni] Started GET 
/volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 838.119µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 620.554µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 731.268µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 628.427µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 589.119µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 190.886µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 190.785µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 186.319µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 202.645µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.398007ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 885.105µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 796.785µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 988.689µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 773.696µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 951.801µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 565.966µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 555.261µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 162.508µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 188.709µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 152.66µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 96.743µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 594.557µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.018422ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 820.029µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 629.595µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 611.47µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 757.565µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 566.483µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 531.38µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 162.568µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 131.375µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 153.663µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 118.513µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 181.139µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.028332ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 637.757µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.035215ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 975.26µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 
OK in 588.818µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 578.038µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 860.668µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.199304ms >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 131.156µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 173.059µs >[negroni] Completed 200 OK in 144.218µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 161.361µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 230.785µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 899.853µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 212.821µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 146.892µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 592.65µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 648.584µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 595.479µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 593.516µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 564.267µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 922.334µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.037502ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 244.598µs >[negroni] 
Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 147.176µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 172.372µs >[negroni] Completed 200 OK in 219.834µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 630.548µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.075931ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 609.092µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 545.226µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 580.902µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 573.239µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 521.45µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 565.484µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 279.698µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 139.093µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 254.161µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 157.349µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.016523ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 656.733µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 611.145µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.005402ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd 
>[negroni] Completed 200 OK in 970.642µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 532.566µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 991.753µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 988.363µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 182.449µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 197.599µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 196.114µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 448.059µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 646.854µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.061232ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.022572ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 539.342µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 598.678µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 798.267µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 534.665µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 533.657µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 157.197µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 150.021µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 139.954µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 
358.441µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 966.076µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.053158ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 957.79µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.223814ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 965.129µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.057972ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 973.756µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 740.081µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 180.586µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 140.305µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 110.99µs >[negroni] Completed 200 OK in 158.665µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 559.651µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 645.258µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 513.697µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 634.103µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 514.885µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 619.645µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 961.826µs >[negroni] Started GET 
/volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 951.587µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 187.228µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 140.665µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 134.574µs >[negroni] Completed 200 OK in 313.565µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 201.282µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.059888ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 119.02µs >[negroni] Completed 200 OK in 1.536351ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 615.673µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 991.833µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 654.782µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.259748ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 528.994µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.026582ms >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 148.815µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 88.149µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 208.561µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 254.208µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 297.567µs >[negroni] Completed 200 OK in 137.026µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.043362ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.073615ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.083085ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 781.405µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 647.744µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.057475ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 766.82µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 947.493µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 217.512µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 197.166µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 217.942µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 209.486µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.054129ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 993.853µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 764.055µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 994.532µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 995.097µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] 
Completed 200 OK in 590.653µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 784.783µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 575.773µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 120.51µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 132.643µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 159.812µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 151.022µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 668.485µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.049398ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 615.663µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 974.546µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 982.29µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.396029ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 564.373µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 971.953µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 175.51µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 171.712µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 172.139µs >[negroni] Completed 200 OK in 275.934µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 944.65µs >[negroni] Started GET 
/volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.021462ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 968.059µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 553.671µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 580.87µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 709.094µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 600.409µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.204791ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 135.547µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 189.025µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 188.786µs >[negroni] Completed 200 OK in 284.79µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 590.67µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 623.875µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 624.437µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 561.732µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 980.68µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 578.035µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 921.916µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 555.12µs >[negroni] Started GET 
/queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 126.14µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 120.359µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 164.22µs >[negroni] Completed 200 OK in 98.449µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 948.527µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 188.455µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 962.936µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 968.223µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.045225ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 969.115µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 817.136µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.082758ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 993.672µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 166.006µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 187.721µs >[negroni] Completed 200 OK in 99.309µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 180.313µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 387.927µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 830.415µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 
612.448µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 555.507µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 537.881µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 584.956µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 563.405µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 535.088µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 498.526µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 137.123µs >[negroni] Completed 200 OK in 209.687µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 192.998µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.239µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 132.602µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 147.169µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 1.429823ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.012139ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.043396ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 727.016µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.040225ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.098388ms >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 678.121µs >[negroni] 
Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.000999ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 173.785µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 234.172µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 175.889µs >[negroni] Completed 200 OK in 132.332µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 764.317µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 812.241µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 644.092µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.74163ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 617.545µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 734.786µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 785.52µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.542711ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 280.684µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.819µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 137.141µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 103.843µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 566.158µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 775.7µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] 
Completed 200 OK in 669.008µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.087582ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 662.112µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 672.561µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 504.184µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 546.008µs >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 173.452µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 132.263µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 150.773µs >[negroni] Completed 200 OK in 221.101µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 992.382µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 685.635µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.064846ms >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 1.037368ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 949.833µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 759.914µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 1.514373ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.913129ms >[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8 >[negroni] Completed 200 OK in 171.43µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 
198.778µs
>[kubeexec] ERROR 2018/06/08 08:47:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout [Error : Request timed out
>]: Stderr []
>[cmdexec] ERROR 2018/06/08 08:47:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc:
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 125.079µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 239.169µs
>[kubeexec] DEBUG 2018/06/08 08:47:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 24min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─14514 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0
>
>Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 08:47:51 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true
>[cmdexec] INFO 2018/06/08 08:47:51 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 08:47:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_966a92ed3e4374a5e634f7b133c49e52 --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
>  <opRet>0</opRet>
>  <opErrno>0</opErrno>
>  <opErrstr/>
>  <snapList>
>    <count>0</count>
>  </snapList>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 08:47:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Another transaction is in progress for vol_5db55c483f6984d9917cfe4b3c8b3cbc. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:47:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Another transaction is in progress for vol_5db55c483f6984d9917cfe4b3c8b3cbc. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:47:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Another transaction is in progress for vol_5db55c483f6984d9917cfe4b3c8b3cbc. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:47:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Another transaction is in progress for vol_5db55c483f6984d9917cfe4b3c8b3cbc. Please try again after sometime.
>[asynchttp] INFO 2018/06/08 08:47:51 asynchttp.go:292: Completed job c5fa6ecf4c03887ea05cf8a2e6abfbd8 in 4m1.243706021s
>[heketi] ERROR 2018/06/08 08:47:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Another transaction is in progress for vol_5db55c483f6984d9917cfe4b3c8b3cbc. Please try again after sometime.
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 5.907398ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 3.63494ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 689.986µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 1.167219ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.621921ms
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 586.362µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 608.912µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 635.131µs
>[negroni] Started GET /queue/c5fa6ecf4c03887ea05cf8a2e6abfbd8
>[negroni] Completed 500 Internal Server Error in 138.493µs
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 772.676µs
>[negroni] Started GET
/volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 952.965µs >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 806.237µs >[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 200 OK in 592.561µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 588.5µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 542.187µs >[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 200 OK in 600.555µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 586.581µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 124.718µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 115.942µs >[negroni] Completed 200 OK in 248.102µs >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 08:47:52 Allocating brick set #0 >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 139.399µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 174.425µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 73.924µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 150.491µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 155.116µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 183.273µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 193.428µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 90.728µs >[negroni] Completed 200 OK in 240.724µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 167.275µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 106.245µs >[negroni] Completed 200 OK in 79.542µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 211.138µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 134.186µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 125.629µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 202.691µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 144.816µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 76.681µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 136.784µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 155.835µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 110.733µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 237.688µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 141.193µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 204.923µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 128.552µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 145.129µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 67.016µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 125.02µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 134.727µs >[negroni] Completed 200 OK in 98.239µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 138.7µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 143.029µs >[negroni] Completed 200 OK in 201.191µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 143.828µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 196.31µs >[negroni] Completed 200 OK in 200.471µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 212.384µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 242.855µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 105.711µs >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 199.769µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 230.398µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 115.12µs >[heketi] ERROR 2018/06/08 08:48:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:510: Unable to determine snapshot information from volume vol_9c85de66c12db0a72b4d16fe888ff74d: 
EOF
>[heketi] ERROR 2018/06/08 08:48:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to determine snapshot information from volume vol_9c85de66c12db0a72b4d16fe888ff74d: EOF
>[kubeexec] DEBUG 2018/06/08 08:48:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml
>Result: Error : Request timed out
>[kubeexec] DEBUG 2018/06/08 08:48:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 22min ago
> Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 830 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service
> ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─12733 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
> └─12735 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
>
>Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 08:48:06 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 08:48:06 Cleaned 0 nodes from health cache
>[heketi] INFO 2018/06/08 08:48:06 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 08:48:06 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com
>[negroni] Completed 202 Accepted in 13.71483546s
>[asynchttp] INFO 2018/06/08 08:48:06 asynchttp.go:288: Started job a8d46afb5176808836420be86f6fdcd0
>[heketi] INFO 2018/06/08 08:48:06 Started async operation: Create Volume
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 155.827µs
>[heketi] INFO 2018/06/08 08:48:06 Creating brick d9145089dd3be60b9df2a82315900670
>[heketi] INFO 2018/06/08 08:48:06 Creating brick e416f1b7fd62ee9320cfd9d57705d34c
>[heketi] INFO 2018/06/08 08:48:06 Creating brick 14353da20ffb37550480dff24b915066
>[heketi] INFO 2018/06/08 08:48:06 Allocating brick set #0
>[kubeexec] DEBUG 2018/06/08 08:48:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:48:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_e416f1b7fd62ee9320cfd9d57705d34c --virtualsize 2097152K --name brick_e416f1b7fd62ee9320cfd9d57705d34c
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_e416f1b7fd62ee9320cfd9d57705d34c" created.
>[kubeexec] DEBUG 2018/06/08 08:48:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e416f1b7fd62ee9320cfd9d57705d34c
>Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e416f1b7fd62ee9320cfd9d57705d34c isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 99.794µs
>[kubeexec] DEBUG 2018/06/08 08:48:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e416f1b7fd62ee9320cfd9d57705d34c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 139.788µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 104.526µs
>[kubeexec] DEBUG 2018/06/08 08:48:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e416f1b7fd62ee9320cfd9d57705d34c /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c
>Result:
>[negroni] Started GET
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 113.825µs >[kubeexec] DEBUG 2018/06/08 08:48:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:48:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2000 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:48:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c/brick >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 116.471µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 107.47µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 75.671µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 170.129µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 153.152µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 133.491µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 86.945µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 163.249µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 230.442µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] 
Completed 200 OK in 119.724µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 78.723µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 140.105µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 123.1µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 121.244µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 73.304µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 224.765µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 202.291µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 285.351µs >[negroni] Completed 200 OK in 101.248µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 148.383µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 164.212µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 129.935µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 84.304µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 157.554µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 140.018µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 162.886µs >[negroni] Completed 200 OK in 201.331µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 168.516µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 
190.381µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 276.958µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 146.745µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 174.982µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 203.658µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 235.261µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 86.974µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 145.345µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 215.355µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 251.348µs >[negroni] Completed 200 OK in 143.246µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 194.833µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 213.279µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 150.069µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 192.265µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 182.138µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 190.951µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 329.187µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 213.095µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 194.148µs >[negroni] 
Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 136.87µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 238.852µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 186.472µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 131.188µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 140.015µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 191.641µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 140.475µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 133.677µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 182.715µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 184.402µs >[negroni] Completed 200 OK in 128.589µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 142.246µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 117.186µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 219.701µs >[negroni] Completed 200 OK in 110.646µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 305.718µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 227.362µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 180.705µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 141.975µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 276.478µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 220.639µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 150.818µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 202.341µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 203.985µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 150.449µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 146.717µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 187.435µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 230.998µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 186.471µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 169.326µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 142.543µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 277.484µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 219.224µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 145.434µs >[negroni] Completed 200 OK in 140.199µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 186.003µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 247.994µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 196.925µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 134.782µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 129.818µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 186.698µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 181.089µs >[negroni] Completed 200 OK in 99.629µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 178.956µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 208.466µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 176.865µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 189.045µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 199.442µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 122.312µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 185.576µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 145.866µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 211.052µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 216.285µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 211.402µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 95.032µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 129.076µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 176.002µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 183.703µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 178.998µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 180.063µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 229.662µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 208.629µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 132.336µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 185.001µs >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 231.111µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 137.829µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 113.531µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 192.157µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 142.417µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 164.457µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 105.697µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 209.922µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 130.834µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 104.444µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 168.552µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 214.769µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 213.904µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 189.788µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 218.279µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 197.556µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 185.376µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 152.717µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 195.582µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 166.778µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.001µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 290.465µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 155.609µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 153.113µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 122.277µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 223.985µs >[negroni] Completed 200 OK in 94.126µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 127.142µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 122.283µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 217.576µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 114.496µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 150.663µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 167.46µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 192.489µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 146.026µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 215.366µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 197.955µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 184.249µs >[negroni] Completed 200 OK in 151.375µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 196.838µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 188.769µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 209.215µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 137.651µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 128.151µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 166.955µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 159.062µs >[negroni] Completed 200 OK in 155.356µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 116.718µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 201.542µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 283.014µs >[negroni] Completed 200 OK in 190.535µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 261.268µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 174.943µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 112.061µs >[negroni] Completed 200 OK in 254.146µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 181.698µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 141.926µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 157.992µs >[negroni] Completed 200 OK in 231.774µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 204.455µs >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 190.411µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 227.372µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 184.499µs >[kubeexec] ERROR 2018/06/08 08:48:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_a4a6e4892da299f6c5634b8a2def697e force] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout [Error : Request timed out >]: Stderr [] >[cmdexec] ERROR 2018/06/08 08:48:51 
/src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: >[kubeexec] DEBUG 2018/06/08 08:48:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 25min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─14272 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p 
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:48:51 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:48:51 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 96.618µs >[negroni] Completed 202 Accepted in 58.843610259s >[asynchttp] INFO 2018/06/08 08:48:51 asynchttp.go:288: Started job 6158d16fba595a299f350a673e859df3 >[heketi] INFO 2018/06/08 08:48:51 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:48:51 Creating brick 30ba742d27672a5289fe7ff6bd5ef3ce >[heketi] INFO 2018/06/08 08:48:51 Creating brick 2aca658dfb3ac9ba0fbf538dad4caa3b >[heketi] INFO 2018/06/08 08:48:51 Creating brick c805e58953c8aa4c8e4d7298563713f7 >[heketi] INFO 2018/06/08 08:48:51 Allocating brick set #0 >[kubeexec] DEBUG 2018/06/08 08:48:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670 >Result: >[kubeexec] DEBUG 2018/06/08 08:48:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce >Result: >[kubeexec] DEBUG 2018/06/08 08:48:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d9145089dd3be60b9df2a82315900670 --virtualsize 2097152K --name brick_d9145089dd3be60b9df2a82315900670 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_d9145089dd3be60b9df2a82315900670" created. >[kubeexec] DEBUG 2018/06/08 08:48:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_30ba742d27672a5289fe7ff6bd5ef3ce --virtualsize 2097152K --name brick_30ba742d27672a5289fe7ff6bd5ef3ce >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_30ba742d27672a5289fe7ff6bd5ef3ce" created. 
>[kubeexec] DEBUG 2018/06/08 08:48:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d9145089dd3be60b9df2a82315900670 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d9145089dd3be60b9df2a82315900670 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_30ba742d27672a5289fe7ff6bd5ef3ce >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_30ba742d27672a5289fe7ff6bd5ef3ce isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 189.982µs >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d9145089dd3be60b9df2a82315900670 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_30ba742d27672a5289fe7ff6bd5ef3ce /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 95.984µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 102.291µs >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d9145089dd3be60b9df2a82315900670 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670 >Result: >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_30ba742d27672a5289fe7ff6bd5ef3ce /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce >Result: >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670/brick >Result: >[kubeexec] DEBUG 2018/06/08 
08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce/brick >Result: >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 101.889µs >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2000 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce/brick >Result: >[kubeexec] ERROR 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_a4a6e4892da299f6c5634b8a2def697e] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for 
vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime. >] >[cmdexec] ERROR 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime. >[heketi] ERROR 2018/06/08 08:48:52 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime. >[heketi] ERROR 2018/06/08 08:48:52 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime. 
>[kubeexec] DEBUG 2018/06/08 08:48:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7 >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 161.873µs >[kubeexec] DEBUG 2018/06/08 08:48:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_c805e58953c8aa4c8e4d7298563713f7 --virtualsize 2097152K --name brick_c805e58953c8aa4c8e4d7298563713f7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_c805e58953c8aa4c8e4d7298563713f7" created. >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 169.447µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 81.457µs >[kubeexec] DEBUG 2018/06/08 08:48:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c805e58953c8aa4c8e4d7298563713f7 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c805e58953c8aa4c8e4d7298563713f7 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:48:53 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c805e58953c8aa4c8e4d7298563713f7 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 144.728µs >[kubeexec] DEBUG 2018/06/08 08:48:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c805e58953c8aa4c8e4d7298563713f7 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7 >Result: >[kubeexec] DEBUG 2018/06/08 08:48:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:48:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:48:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7/brick >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 131.853µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 193.385µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 103.91µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 118.457µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 104.489µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 105.37µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 96.408µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 189.506µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 116.587µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 218.772µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 124.826µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 170.532µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 159.531µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 254.985µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 133.502µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 138.568µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 113.613µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 164.395µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 72.901µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 208.292µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 179.579µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 182.445µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 123.946µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 253.852µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 190.083µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 170.309µs >[negroni] Completed 200 OK in 102.908µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 207.091µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 142.213µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 145.194µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 102.374µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 153.877µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 194.657µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 120.354µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 265.949µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 120.418µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 123.112µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 166.793µs >[negroni] Completed 200 OK in 184.823µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 117.271µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 146.42µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 183.638µs >[negroni] Completed 200 OK in 167.148µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 159.823µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 174.322µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 208.134µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 149.833µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 126.26µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 166.216µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 170.67µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 153.142µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 257.035µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 131.611µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 132.596µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 134.169µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 183.166µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 134.687µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 183.636µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 183.629µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 256.724µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.999µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 212.134µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 222.388µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 196.758µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 204.123µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 133.951µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 224.398µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 145.021µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 215.948µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 175.299µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 113.117µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 108.276µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 257.188µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 322.678µs >[negroni] Completed 200 OK in 363.191µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 124.742µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 166.217µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 183.788µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 99.714µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 203.604µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 165.129µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 168.109µs >[negroni] Completed 200 OK in 161.573µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 143.381µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 182.692µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 165.368µs >[negroni] Completed 200 OK in 192.336µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 141.011µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 194.016µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 246.461µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 167.986µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 130.672µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 147.648µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 104.226µs >[negroni] Completed 200 OK in 295.185µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 253.978µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 295.771µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 189.828µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 142.342µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 210.049µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 199.605µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 146.346µs >[negroni] Completed 200 OK in 149.575µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 212.527µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 174.916µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 189.822µs >[negroni] Completed 200 OK in 136.242µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 189.072µs >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started POST /volumes >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 238.538µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 149.151µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 135.719µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 173.931µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 137.277µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 129.422µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 94.799µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 167.427µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 166.01µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 167.073µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 141.637µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 242.145µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 213.423µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 177.472µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 127.062µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 135.838µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 226.515µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 215.077µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 143.681µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 274.938µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 180.417µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 138.17µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 83.308µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 195.649µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 243.954µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 310.38µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 225.131µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 204.004µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 266.431µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 282.885µs >[negroni] Completed 200 OK in 145.299µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 209.278µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 174.412µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 175.553µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 165.529µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 171.509µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 180.195µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 237.731µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 144.422µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 331.714µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 201.641µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 131.878µs >[negroni] Completed 200 OK in 316.548µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 126.978µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 231.181µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 299.368µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 145.485µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 199.601µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 297.446µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 272.771µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 206.085µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 184.518µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 156.317µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 218.495µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 174.109µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 281.215µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 253.908µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 159.96µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 97.784µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 214.612µs >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 153.274µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 
200 OK in 258.268µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 150.592µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 133.383µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 129.25µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 173.978µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 156.794µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 200.39µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 166.832µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 173.792µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 128.432µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 138.712µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 150.523µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 220.44µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 80.156µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 194.158µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 176.675µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 153.322µs >[negroni] Completed 200 OK in 88.655µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 137.241µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 203.788µs 
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 206.181µs >[negroni] Completed 200 OK in 296.607µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 185.361µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 208.166µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 138.731µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 99.751µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 261.511µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 139.352µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 146.556µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 136.919µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 356.464µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 199.812µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 158.138µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 355.428µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 145.206µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 246.722µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 240.918µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 198.711µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 281.681µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 234.794µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 133.415µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 76.564µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 137.206µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 339.994µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 211.362µs >[negroni] Completed 200 OK in 158.716µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 227.931µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 206.898µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 187.319µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 115.882µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 246.708µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.944µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 362.717µs >[negroni] Completed 200 OK in 181.605µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 160.969µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 224.808µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 271.925µs >[negroni] Completed 200 OK in 281.741µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 196.522µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.774µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 179.411µs >[negroni] Completed 200 OK in 268.378µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 143.621µs >[kubeexec] ERROR 2018/06/08 08:49:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_966a92ed3e4374a5e634f7b133c49e52 force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout [Error : Request timed out >]: Stderr [] >[cmdexec] ERROR 2018/06/08 08:49:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: >[kubeexec] DEBUG 2018/06/08 08:49:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066 >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 126.068µs >[kubeexec] DEBUG 2018/06/08 08:49:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_14353da20ffb37550480dff24b915066 --virtualsize 2097152K --name brick_14353da20ffb37550480dff24b915066 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. 
> Logical volume "brick_14353da20ffb37550480dff24b915066" created. >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 114.65µs >[negroni] Completed 200 OK in 92.551µs >[kubeexec] DEBUG 2018/06/08 08:49:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_14353da20ffb37550480dff24b915066 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_14353da20ffb37550480dff24b915066 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:49:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_14353da20ffb37550480dff24b915066 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 107.588µs >[kubeexec] DEBUG 2018/06/08 08:49:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_14353da20ffb37550480dff24b915066 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066 >Result: >[kubeexec] DEBUG 2018/06/08 08:49:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:49:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2000 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:49:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066/brick >Result: >[cmdexec] INFO 2018/06/08 08:49:52 Creating volume vol_ad1e5849e9566f1bcaa09cfb9c0b96ef replica 3 >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 26min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p 
/var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─14514 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:49:53 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:49:53 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[negroni] Completed 202 Accepted in 2m0.453436098s >[asynchttp] INFO 2018/06/08 08:49:53 asynchttp.go:288: Started job d6cbe9a27af546f3c96e74f32e86cd27 >[heketi] INFO 2018/06/08 08:49:53 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:49:53 Creating brick 1bb49691eccf328025855a91ee8cbc66 >[heketi] INFO 2018/06/08 08:49:53 Creating brick cc2483d7b49bc029b5200a024cac7535 >[heketi] INFO 2018/06/08 08:49:53 Creating brick 2ed96343627983cf9667b7cee4052d17 >[heketi] INFO 2018/06/08 08:49:53 Allocating brick set #0 >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 24min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p 
/var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─12733 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > └─12735 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:49:53 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:49:53 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:49:53 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:49:53 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b >Result: >[negroni] Completed 202 Accepted in 2m0.552281252s >[asynchttp] INFO 2018/06/08 08:49:53 asynchttp.go:288: Started job 078bf2d1330026527a20bcfa4ebc6028 >[heketi] INFO 2018/06/08 08:49:53 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:49:53 Creating brick df5559fdf15f6372e19e4e9f8bc1f129 >[heketi] INFO 2018/06/08 08:49:53 Creating brick dad9ada287c446b0011af8dc964060e1 >[heketi] INFO 2018/06/08 08:49:53 Creating brick 3dbab6cd698d01ea7f00dbd81329643a >[heketi] INFO 2018/06/08 08:49:53 Allocating brick set #0 >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 144.863µs >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17 >Result: >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 105.476µs >[negroni] Completed 200 OK in 83.354µs >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate 
--autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2aca658dfb3ac9ba0fbf538dad4caa3b --virtualsize 2097152K --name brick_2aca658dfb3ac9ba0fbf538dad4caa3b >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_2aca658dfb3ac9ba0fbf538dad4caa3b" created. >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 165.668µs >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2aca658dfb3ac9ba0fbf538dad4caa3b >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2aca658dfb3ac9ba0fbf538dad4caa3b isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_2ed96343627983cf9667b7cee4052d17 --virtualsize 2097152K --name brick_2ed96343627983cf9667b7cee4052d17 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_2ed96343627983cf9667b7cee4052d17" created. 
>[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2aca658dfb3ac9ba0fbf538dad4caa3b /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_2ed96343627983cf9667b7cee4052d17 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_2ed96343627983cf9667b7cee4052d17 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started POST /volumes >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2aca658dfb3ac9ba0fbf538dad4caa3b /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b >Result: >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_2ed96343627983cf9667b7cee4052d17 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:49:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b/brick >Result: >[negroni] Started POST /volumes >[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_2ed96343627983cf9667b7cee4052d17 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17 >Result: >[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b/brick >Result: >[cmdexec] INFO 2018/06/08 08:49:54 Creating volume vol_15e0122e942fc41f80666a3714670682 replica 3 >[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17/brick >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d 
>[negroni] Completed 200 OK in 154.48µs
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2002 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17/brick
>Result:
>[negroni] Started POST /volumes
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 118.576µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 76.056µs
>[kubeexec] ERROR 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_966a92ed3e4374a5e634f7b133c49e52] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:49:54 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
>[heketi] ERROR 2018/06/08 08:49:54 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. Please try again after sometime.
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129
>Result:
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 111.932µs
>[negroni] Started POST /volumes
>[negroni] Started POST /volumes
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_cc2483d7b49bc029b5200a024cac7535 --virtualsize 2097152K --name brick_cc2483d7b49bc029b5200a024cac7535
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_cc2483d7b49bc029b5200a024cac7535" created.
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_df5559fdf15f6372e19e4e9f8bc1f129 --virtualsize 2097152K --name brick_df5559fdf15f6372e19e4e9f8bc1f129
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_df5559fdf15f6372e19e4e9f8bc1f129" created.
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_cc2483d7b49bc029b5200a024cac7535
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_cc2483d7b49bc029b5200a024cac7535 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_df5559fdf15f6372e19e4e9f8bc1f129
>Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_df5559fdf15f6372e19e4e9f8bc1f129 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:49:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_cc2483d7b49bc029b5200a024cac7535 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started POST /volumes
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_df5559fdf15f6372e19e4e9f8bc1f129 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_cc2483d7b49bc029b5200a024cac7535 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_df5559fdf15f6372e19e4e9f8bc1f129 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129
>Result:
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 145.839µs
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535/brick
>Result:
>[negroni] Started POST /volumes
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2002 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535/brick
>Result:
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 124.92µs
>[negroni] Completed 200 OK in 120.617µs
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2003 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a
>Result:
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 118.397µs
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_3dbab6cd698d01ea7f00dbd81329643a --virtualsize 2097152K --name brick_3dbab6cd698d01ea7f00dbd81329643a
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_3dbab6cd698d01ea7f00dbd81329643a" created.
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3dbab6cd698d01ea7f00dbd81329643a
>Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3dbab6cd698d01ea7f00dbd81329643a isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:49:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3dbab6cd698d01ea7f00dbd81329643a /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 1.299228ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.386327ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 880.047µs
>[negroni] Started GET /volumes/966a92ed3e4374a5e634f7b133c49e52
>[negroni] Completed 200 OK in 627.511µs
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 932.96µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 926.287µs
>[negroni] Started GET /volumes/a4a6e4892da299f6c5634b8a2def697e
>[negroni] Completed 200 OK in 572.183µs
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 543.119µs
>[kubeexec] DEBUG 2018/06/08 08:49:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3dbab6cd698d01ea7f00dbd81329643a /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a
>Result:
>[kubeexec] DEBUG 2018/06/08 08:49:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a/brick
>Result:
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 102.499µs
>[kubeexec] DEBUG 2018/06/08 08:49:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2003 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a/brick
>Result:
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 197.851µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 114.18µs
>[kubeexec] DEBUG 2018/06/08 08:49:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a/brick
>Result:
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 108.489µs
>[negroni] Started POST /volumes
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 127.224µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 136.614µs
>[negroni] Completed 200 OK in 97.101µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 130.635µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 131.461µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 165.068µs
>[negroni] Completed 200 OK in 115.288µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 130.635µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 114.063µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 285.631µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 119.808µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 132.393µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 124.375µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 121.503µs
>[negroni] Completed 200 OK in 101.166µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 137.964µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 140.875µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 136.889µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 200.337µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 113.678µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 219.092µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 243.852µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 216.688µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 145.562µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 159.849µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 150.642µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 239.655µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 269.511µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 161.715µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 266.712µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 122.747µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 145.09µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 201.278µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 160.794µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 122.131µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 204.978µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 193.649µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 196.515µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 130.693µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 139.646µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 144.771µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 207.109µs
>[negroni] Completed 200 OK in 89.202µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 157.795µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 192.828µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 199.608µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 107.47µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 204.682µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 213.231µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 205.997µs
>[negroni] Completed 200 OK in 366.908µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 175.009µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 162.795µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 207.541µs
>[negroni] Completed 200 OK in 103.037µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 156.824µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 172.545µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 242.725µs
>[negroni] Completed 200 OK in 99.733µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 133.764µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 139.866µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 188.043µs
>[negroni] Completed 200 OK in 147.179µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 168.722µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 219.098µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 189.528µs
>[negroni] Completed 200 OK in 158.568µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 207.725µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 183.473µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 145.644µs
>[negroni] Completed 200 OK in 142.742µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 140.016µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 155.777µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 223.982µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 152.345µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 268.304µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 199.822µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 194.215µs
>[negroni] Completed 200 OK in 139.445µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 185.962µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 180.673µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 213.216µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 200.942µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 204.035µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 188.762µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 217.242µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 199.072µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 182.074µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 180.484µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 205.992µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 200.918µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 206.925µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 220.038µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 333.088µs
>[negroni] Completed 200 OK in 163.938µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 262.468µs
>[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 247.578µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 151.065µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 89.95µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 131.309µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 203.819µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 206.978µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 151.19µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 214.222µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 229.851µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 195.909µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 130.445µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 181.809µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 343.511µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 247.528µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 166.062µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 181.243µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 158.815µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 276.551µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 134.648µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 186.638µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 182.509µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 232.132µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 160.949µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 381.713µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 201.159µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 308.114µs
>[negroni] Completed 200 OK in 136.122µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 170.725µs
>[negroni] Started POST /volumes
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 144.406µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 135.314µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 93.425µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 131.074µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 204.479µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 244.048µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 138.616µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 306.917µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 175.82µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 157.472µs
>[negroni] Completed 200 OK in 260.586µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 276.451µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 213.297µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 131.937µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 87.492µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 130.517µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 188.348µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 172.976µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 107.246µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 217.452µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 166.432µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 341.581µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 143.098µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 260.105µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 250.244µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 185.411µs
>[negroni] Completed 200 OK in 151.782µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 168.933µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 287.595µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 221.089µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 210.389µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 254.764µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 202.825µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 155.516µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 95.382µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 134.072µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 178.343µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 194.252µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 157.692µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 187.812µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 237.135µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 243.734µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 169.242µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 199.022µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 223.935µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 144.096µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 172.799µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 230.052µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 152.841µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 174.653µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 114.448µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 133.879µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 157.864µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 220.651µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 241.695µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 223.716µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 232.338µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 189.838µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 111.086µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 258.511µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 208.302µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 157.228µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 152.009µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 197.356µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 212.829µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 228.661µs
>[negroni] Completed 200 OK in 165.402µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 182.816µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 191.809µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 191.796µs
>[negroni] Completed 200 OK in 139.926µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 159.168µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 236.268µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 183.789µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 166.546µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 204.672µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 140.15µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 254.146µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 165.759µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 281.731µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 182.569µs
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408
>[negroni] Completed 200 OK in 183.073µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 256.612µs
>[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0
>[negroni] Completed 200 OK in 223.454µs
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d
>[negroni] Completed 200 OK in 330.111µs
>[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18
>[negroni] Completed 200 OK in 140.542µs
>[negroni] Started GET
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 172.319µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 201.852µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 148.018µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 190.744µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 169.183µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 162.761µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 192.896µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 223.325µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 137.525µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 135.709µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 210.295µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 208.459µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 119.317µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 257.745µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 194.331µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 183.535µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 121.383µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 254.377µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.62µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 201.882µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 125.733µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 247.141µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 205.564µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 137.945µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 73.777µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 188.787µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 203.358µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 210.435µs >[negroni] Completed 200 OK in 251.834µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 195.325µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 193.789µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 192.562µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 150.863µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 206.701µs >[negroni] Started POST /volumes >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 194.066µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 179.959µs >[negroni] Completed 200 OK in 203.978µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 150.686µs >[negroni] 
Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 137.062µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 205.129µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 167.846µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 211.201µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 201.755µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 193.219µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 328.467µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 175.522µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 158.329µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 180.293µs >[negroni] Completed 200 OK in 105.035µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 227.43µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 145.297µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 130.578µs >[negroni] Completed 200 OK in 138.276µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 150.135µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 131.027µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 201.785µs >[negroni] Completed 200 OK in 160.349µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 212.255µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.212µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 195.59µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 166.357µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 149.291µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 157.235µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 244.358µs >[negroni] Completed 200 OK in 103.257µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 203.305µs >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 206.669µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 186.409µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 134.907µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 217.168µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 192.656µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 194.821µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 195.142µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 186.871µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.135µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 
200 OK in 183.361µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 147.045µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 234.215µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 202.171µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 196.353µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 88.133µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 242.838µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 143.4µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 150.844µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 117.557µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 150.507µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 256.451µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 233.978µs >[negroni] Completed 200 OK in 269.038µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 166.562µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 200.218µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 255.888µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 569.996µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 134.955µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 200.261µs 
>[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 191.955µs >[negroni] Completed 200 OK in 82.777µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 187.498µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 147.717µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 253.008µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 154.859µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 128.249µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 201.081µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 225.758µs >[negroni] Completed 200 OK in 161.992µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 343.743µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 218.071µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 148.789µs >[negroni] Completed 200 OK in 219.403µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 135.655µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 257.718µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 181.129µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 202.199µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 203.676µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 252.445µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 207.861µs >[negroni] Completed 200 OK in 160.335µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 167.361µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 204.023µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 193.788µs >[negroni] Completed 200 OK in 146.175µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 214.432µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 154.35µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 191.428µs >[negroni] Completed 200 OK in 142.679µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 291.214µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 143.498µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 274.018µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 125.853µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 271.948µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 227.951µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 131.511µs >[negroni] Completed 200 OK in 106.476µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 215.195µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 193.962µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 200.068µs >[negroni] Completed 200 OK in 164.683µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 211.755µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 179.327µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 165.592µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 172.171µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 152.413µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 152.95µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 212.486µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 156.353µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 149.562µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 208.645µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 206.775µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 202.525µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 242.285µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 233.648µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 281.245µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 134.859µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 219.688µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 123.726µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 141.944µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 122.913µs >[negroni] Started POST /volumes >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 129.992µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.362µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 122.255µs >[negroni] Completed 200 OK in 1.053625ms >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 207.148µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 224.021µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 208.858µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 135.126µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 189.896µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 223.188µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 129.731µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 178.035µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 178.819µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 196.712µs >[negroni] 
Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 160.436µs >[negroni] Completed 200 OK in 384.517µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 211.481µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 193.521µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 197.576µs >[negroni] Completed 200 OK in 296.628µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 288.157µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 192.675µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 175.284µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 145.663µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 181.499µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 296.231µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 211.488µs >[negroni] Completed 200 OK in 139.386µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 192.398µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 230.972µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 190.235µs >[negroni] Completed 200 OK in 249.171µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 205.089µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 236.455µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 226.648µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 440.58µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 177.677µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 226.445µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 149.084µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 144.922µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 195.549µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 225.155µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 188.795µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 104.616µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 128.123µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 239.132µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 125.034µs >[negroni] Completed 200 OK in 80.839µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 133.583µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 168.478µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 177.043µs >[negroni] Completed 200 OK in 183.936µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 243.805µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 224.982µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 130.013µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 298.719µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 185.776µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 197.988µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 185.555µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 102.162µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 188.578µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 183.718µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 220.715µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 124.946µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 232.618µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 193.202µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 220.414µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 234.909µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 262.624µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 253.592µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 158.039µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 263.858µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 140.119µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 129.887µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 249.181µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 189.848µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 213.655µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 203.405µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 265.978µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 215.008µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 175.691µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 207.948µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 145.147µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 136.282µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 253.711µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 145.143µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 214.699µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 289.501µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 294.584µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 328.203µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 243.911µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 124.468µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 177.561µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.528µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 199.155µs >[negroni] Completed 200 OK in 259.224µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 234.371µs >[kubeexec] ERROR 2018/06/08 08:51:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume create vol_ad1e5849e9566f1bcaa09cfb9c0b96ef replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c/brick] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout [Error : Request timed out >]: Stderr [] >[kubeexec] DEBUG 2018/06/08 08:51:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66 >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 188.059µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 225.199µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 103.759µs 
>[kubeexec] DEBUG 2018/06/08 08:51:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_1bb49691eccf328025855a91ee8cbc66 --virtualsize 2097152K --name brick_1bb49691eccf328025855a91ee8cbc66 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_1bb49691eccf328025855a91ee8cbc66" created. >[kubeexec] DEBUG 2018/06/08 08:51:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1bb49691eccf328025855a91ee8cbc66 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1bb49691eccf328025855a91ee8cbc66 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 162.556µs >[kubeexec] DEBUG 2018/06/08 08:51:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1bb49691eccf328025855a91ee8cbc66 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:51:54 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1bb49691eccf328025855a91ee8cbc66 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66 >Result: >[kubeexec] DEBUG 2018/06/08 08:51:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:51:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2002 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:51:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66/brick >Result: >[cmdexec] INFO 2018/06/08 08:51:54 Creating volume vol_9f9fb17746d0aad637b132875d2744e5 replica 3 >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 146.5µs >[kubeexec] DEBUG 2018/06/08 08:51:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 28min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p 
/var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─14272 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:51:54 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:51:54 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[negroni] Completed 202 Accepted in 4m1.895668064s >[asynchttp] INFO 2018/06/08 08:51:54 asynchttp.go:288: Started job d3134ebfac4ef9bd8dbf824bd7bf873a >[heketi] INFO 2018/06/08 08:51:54 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:51:54 Creating brick c6935061d1c9dbb05f316e3d87080a38 >[heketi] INFO 2018/06/08 08:51:54 Creating brick f4fd75475848eb3cf3c5f985495d64f3 >[heketi] INFO 2018/06/08 08:51:54 Creating brick 5a17b5c320a4f4efb036a3001a05544b >[heketi] INFO 2018/06/08 08:51:54 Allocating brick set #0 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 114.506µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 300.812µs >[kubeexec] DEBUG 2018/06/08 08:51:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1 >Result: >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 189.42µs >[kubeexec] DEBUG 2018/06/08 08:51:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_dad9ada287c446b0011af8dc964060e1 --virtualsize 2097152K --name brick_dad9ada287c446b0011af8dc964060e1 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_dad9ada287c446b0011af8dc964060e1" created. 
>[kubeexec] DEBUG 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dad9ada287c446b0011af8dc964060e1 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dad9ada287c446b0011af8dc964060e1 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dad9ada287c446b0011af8dc964060e1 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dad9ada287c446b0011af8dc964060e1 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1 >Result: >[kubeexec] DEBUG 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1/brick >Result: >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 150.178µs >[kubeexec] DEBUG 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2003 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1/brick >Result: >[cmdexec] INFO 2018/06/08 08:51:55 Creating volume vol_b99532640a5201d243193159ee762ae4 replica 3 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 101.944µs >[negroni] Completed 200 OK in 64.643µs >[kubeexec] ERROR 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume create vol_15e0122e942fc41f80666a3714670682 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b/brick] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout [Error : Request timed out >]: Stderr [] >[kubeexec] ERROR 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_ad1e5849e9566f1bcaa09cfb9c0b96ef force] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: 
vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: failed: Another transaction is in progress for vol_ad1e5849e9566f1bcaa09cfb9c0b96ef. Please try again after sometime. >] >[cmdexec] ERROR 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: Unable to execute command on glusterfs-storage-vsh2m: volume stop: vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: failed: Another transaction is in progress for vol_ad1e5849e9566f1bcaa09cfb9c0b96ef. Please try again after sometime. >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 111.793µs >[kubeexec] DEBUG 2018/06/08 08:51:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f4fd75475848eb3cf3c5f985495d64f3 >Result: >[kubeexec] DEBUG 2018/06/08 08:51:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_f4fd75475848eb3cf3c5f985495d64f3 --virtualsize 2097152K --name brick_f4fd75475848eb3cf3c5f985495d64f3 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_f4fd75475848eb3cf3c5f985495d64f3" created. 
>[kubeexec] DEBUG 2018/06/08 08:51:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f4fd75475848eb3cf3c5f985495d64f3 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f4fd75475848eb3cf3c5f985495d64f3 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:51:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f4fd75475848eb3cf3c5f985495d64f3 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f4fd75475848eb3cf3c5f985495d64f3 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:51:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f4fd75475848eb3cf3c5f985495d64f3 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f4fd75475848eb3cf3c5f985495d64f3 >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 117.725µs >[kubeexec] DEBUG 2018/06/08 08:51:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f4fd75475848eb3cf3c5f985495d64f3/brick >Result: >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 137.185µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 70.448µs >[kubeexec] DEBUG 2018/06/08 08:51:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2004 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f4fd75475848eb3cf3c5f985495d64f3/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:51:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f4fd75475848eb3cf3c5f985495d64f3/brick >Result: >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 112.727µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.542µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 136.177µs >[negroni] Completed 200 OK in 98.126µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 109.317µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 181.523µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 130.809µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 99.73µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 180.331µs >[negroni] Started POST /volumes >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 
108.609µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 169.406µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 125.169µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 195.254µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 181.302µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 138.738µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 129.143µs >[kubeexec] DEBUG 2018/06/08 08:52:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38 >Result: >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 135.642µs >[kubeexec] DEBUG 2018/06/08 08:52:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_c6935061d1c9dbb05f316e3d87080a38 --virtualsize 2097152K --name brick_c6935061d1c9dbb05f316e3d87080a38 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_c6935061d1c9dbb05f316e3d87080a38" created. 
>[kubeexec] DEBUG 2018/06/08 08:52:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:52:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 111.224µs >[kubeexec] DEBUG 2018/06/08 08:52:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38 >Result: >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 120.692µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 90.544µs >[kubeexec] DEBUG 2018/06/08 
08:52:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:52:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2004 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick >Result: >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 135.658µs >[kubeexec] DEBUG 2018/06/08 08:52:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 144.345µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 165.147µs >[negroni] Completed 200 OK in 184.125µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 126.69µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 200.922µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 129.112µs >[negroni] Completed 200 OK in 76.161µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 123.844µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 126.507µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] 
Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 150.084µs >[negroni] Completed 200 OK in 75.173µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 111.054µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 185.359µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 248.748µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 137.136µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 150.465µs >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 195.592µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 229.303µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 97.168µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 192.973µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 215.616µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 241.182µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 122.733µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 192.461µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 173.807µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 184.634µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 157.943µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 154.27µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 266.731µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 260.124µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 133.762µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 204.081µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 271.971µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 195.412µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 137.229µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 231.044µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 198.078µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 184.035µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 106.5µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 168.019µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 186.639µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 199.322µs >[negroni] Completed 200 OK in 173.768µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 243.591µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 148.229µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 193.809µs >[negroni] Completed 200 OK in 260.725µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 142.809µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 221.228µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 162.908µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 132.759µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 195.238µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.758µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 219.221µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 153.935µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 188.355µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 214.799µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 181.029µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 309.211µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 263.408µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 205.175µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 193.279µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 137.896µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 206.835µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 136.468µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 137.415µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 211.015µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 195.044µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 310.435µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 248.805µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 167.072µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 211.908µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.342µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 148.178µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 226.029µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 164.897µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 197.166µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 167.435µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 130.942µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 130.206µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 151.172µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 183.324µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 152.642µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 153.291µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 199.018µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 227.028µs >[negroni] Completed 200 OK in 141.699µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 233.475µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 263.651µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 202.898µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 142.366µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 185.509µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 198.289µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 171.455µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 144.519µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 199.358µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 164.597µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 219.264µs >[negroni] Completed 200 OK in 137.893µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 150.877µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 200.432µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 204.065µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 144.579µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 169.772µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 196.835µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 254.321µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 123.392µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 145.724µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 4.629369ms >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 191.692µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 177.339µs >[negroni] Completed 200 OK in 118.771µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 267.888µs >[negroni] Started DELETE /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 223.778µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 145.328µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 178.429µs >[negroni] Completed 200 OK in 150.525µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 218.575µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 137.19µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 121.994µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 94.584µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 206.593µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 168.272µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 169.172µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 419.83µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 173.089µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 260.541µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 149.074µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 99.133µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 148.425µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 187.058µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 151.374µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 69.724µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 175.352µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 239.72µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 208.592µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 251.915µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 142.303µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 153.813µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 155.849µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 111.575µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 215.503µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 184.456µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 201.805µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 138.139µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 178.032µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 185.416µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 224.767µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 97.335µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 199.068µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 200.766µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 173.826µs >[negroni] Completed 200 OK in 188.155µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 197.392µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 120.44µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 132.562µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 133.533µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 144.329µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 175.922µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 134.011µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 100.853µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 120.067µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 119.443µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 204.871µs >[negroni] Completed 200 OK in 104.079µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 154.433µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 144.888µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 193.171µs >[negroni] Completed 200 OK in 93.766µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 161.809µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 189.286µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 275.752µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 225.935µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 194.622µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 125.001µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 110.227µs >[negroni] Completed 200 OK in 190.389µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 219.382µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 109.917µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 
OK in 165.169µs >[negroni] Completed 200 OK in 168.139µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 148.938µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 208.092µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 154.603µs >[negroni] Completed 200 OK in 293.168µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 320.392µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 234.515µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 180.136µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 100.483µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 177.015µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 133.628µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 101.488µs >[negroni] Completed 200 OK in 176.866µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 129.917µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 244.424µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 135.658µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 153.712µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 214.924µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 177.873µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 
>[negroni] Completed 200 OK in 113.551µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 79.043µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 140.582µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 161.293µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 232.033µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 211.018µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 193.625µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 217.06µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 160.026µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 157.762µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 192.502µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.174µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 170.371µs >[negroni] Completed 200 OK in 228.889µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 196.772µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 186.339µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 117.023µs >[negroni] Completed 200 OK in 299.042µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 149.308µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 
OK in 177.316µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 225.285µs >[negroni] Completed 200 OK in 205.198µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 213.521µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 141.047µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 205.706µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 131.56µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 336.887µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 200.322µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 246.782µs >[negroni] Completed 200 OK in 101.044µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 233.011µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 210.815µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 204.119µs >[negroni] Completed 200 OK in 145.529µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 309.708µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 118.982µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 369.888µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 1.534306ms >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 5.3361ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 
>[negroni] Completed 200 OK in 3.529888ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 2.998794ms >[negroni] Started DELETE /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[heketi] ERROR 2018/06/08 08:53:00 /src/github.com/heketi/heketi/apps/glusterfs/app_node.go:241: Unable to delete node [278bd6b4e16a8e62ef15aaae22e6abc1] because it contains devices >[negroni] Completed 409 Conflict in 637.888µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 158.535µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 161.091µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 2.511085ms >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 189.939µs >[negroni] Completed 200 OK in 91.001µs >[negroni] Started POST /volumes >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 176.309µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 169.983µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 199.416µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 171.871µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 151.649µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 183.129µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 266.655µs >[negroni] Completed 200 OK in 426.826µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 263.254µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 
152.246µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 212.839µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 94.387µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 200.716µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 121.244µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 186.465µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 486.266µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 139.523µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 219.381µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 312.86µs >[negroni] Completed 200 OK in 136.44µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 315.448µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 284.401µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 193.203µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 141.099µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 206.616µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 135.778µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 142.575µs >[negroni] Completed 200 OK in 106.615µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 194.525µs >[negroni] 
Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 195.931µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 228.948µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 134.026µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 191.203µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 208.682µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 124.499µs >[negroni] Completed 200 OK in 78.433µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 138.394µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 189.249µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 133.234µs >[negroni] Completed 200 OK in 118.624µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 182.789µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 172.592µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 169.312µs >[negroni] Completed 200 OK in 145.509µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 119.146µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 179.237µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 189.969µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 170.678µs >[negroni] Started GET 
/queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 233.082µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 140.844µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 220.7µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 190.52µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 130.048µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 200.749µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 230.522µs >[negroni] Completed 200 OK in 163.719µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 165.332µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 193.078µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 250.172µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 89.01µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 245.911µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.988µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 195.793µs >[negroni] Completed 200 OK in 168.544µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 250.812µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 247.072µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 232.621µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 189.226µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 175.327µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 149.117µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 257.118µs >[negroni] Completed 200 OK in 146.916µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 177.207µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 190.376µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 128.442µs >[negroni] Completed 200 OK in 273.434µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 123.775µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 171.211µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 230.411µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 135.446µs >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 193.545µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 114.323µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 139.633µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 117.512µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 146.006µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] 
Completed 200 OK in 149.994µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 307.202µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 196.432µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 173.762µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 199.894µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 192.882µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 151.749µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 185.651µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 243.171µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 191.621µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 170.986µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 176.919µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 216.835µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 187.129µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 140.782µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 154.249µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 259.465µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 230.488µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 129.562µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 
216.058µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 250.272µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 279.697µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 181.568µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 181.268µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 266.399µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 188.951µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 176.348µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 189.142µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 222.362µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 220.728µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 199.605µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 189.656µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 199.576µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 284.005µs >[negroni] Completed 200 OK in 98.077µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 142.811µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 127.572µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 209.278µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 1.576409ms >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 127.545µs >[negroni] Completed 200 OK in 82.001µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 1.490437ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 1.849942ms >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 132.64µs >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.681931ms >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 174.042µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 183.978µs >[negroni] Completed 200 OK in 272µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 300.881µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 156.86µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 152.922µs >[negroni] Completed 200 OK in 238.913µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 278.276µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 288.878µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 240.871µs >[negroni] Completed 200 OK in 103.787µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 146.959µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 314.444µs >[negroni] Started GET 
/queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 164.553µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 254.214µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 208.875µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 261.011µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 236.898µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 228.448µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 198.179µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 167.922µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 184.026µs >[negroni] Completed 200 OK in 219.706µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 234.4µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 146.574µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 171.976µs >[negroni] Completed 200 OK in 131.566µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 252.108µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 213.623µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 135.786µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 161.212µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 220.719µs >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 179.888µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 141.397µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 168.112µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 193.392µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 374.133µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 187.524µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 152.322µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 200 OK in 214.752µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 187.252µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 142.595µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 120.653µs >[negroni] Started GET /queue/a8d46afb5176808836420be86f6fdcd0 >[negroni] Completed 401 Unauthorized in 135.43µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 252.895µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 236.851µs >[negroni] Completed 200 OK in 127.913µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 204.956µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 136.291µs >[negroni] Completed 200 OK in 124.726µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 139.038µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 237.525µs >[negroni] Completed 200 OK in 154.932µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 162.292µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 208.122µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 203.585µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 186.485µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 172.828µs >[negroni] Completed 200 OK in 175.176µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 249.86µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 143.579µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 92.984µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 204.612µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 143.516µs >[negroni] Completed 200 OK in 124.198µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 213.661µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 190.928µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 153.579µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 213.106µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 
OK in 186.802µs >[negroni] Completed 200 OK in 143.239µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 192.886µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 201.125µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 134.249µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 165.416µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 278.384µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 182.418µs >[kubeexec] ERROR 2018/06/08 08:53:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume create vol_9f9fb17746d0aad637b132875d2744e5 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66/brick] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout [Error : Request timed out >]: Stderr [] >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 124.736µs >[kubeexec] DEBUG 2018/06/08 08:53:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 30min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level 
$LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─14514 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:53:54 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:53:54 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[negroni] Completed 202 Accepted in 6m2.125589789s >[asynchttp] INFO 2018/06/08 08:53:54 asynchttp.go:288: Started job 53ae2da32657329fa5a30bfda1414335 >[heketi] INFO 2018/06/08 08:53:54 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:53:54 Creating brick 079040d8947a2eda08d5ddbdc41bfa83 >[heketi] INFO 2018/06/08 08:53:54 Creating brick ac396b39a38b350b3672e904f47f449e >[heketi] INFO 2018/06/08 08:53:54 Creating brick 615bca09219fdb7b02ec4e0a6a62d4f7 >[heketi] INFO 2018/06/08 08:53:54 Allocating brick set #0 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 136.194µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 106.942µs >[kubeexec] DEBUG 2018/06/08 08:53:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5a17b5c320a4f4efb036a3001a05544b >Result: >[kubeexec] DEBUG 2018/06/08 08:53:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_5a17b5c320a4f4efb036a3001a05544b --virtualsize 2097152K --name brick_5a17b5c320a4f4efb036a3001a05544b >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_5a17b5c320a4f4efb036a3001a05544b" created. 
>[kubeexec] DEBUG 2018/06/08 08:53:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5a17b5c320a4f4efb036a3001a05544b >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5a17b5c320a4f4efb036a3001a05544b isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:53:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5a17b5c320a4f4efb036a3001a05544b /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5a17b5c320a4f4efb036a3001a05544b xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:53:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_5a17b5c320a4f4efb036a3001a05544b /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5a17b5c320a4f4efb036a3001a05544b >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 99.509µs >[kubeexec] DEBUG 2018/06/08 08:53:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5a17b5c320a4f4efb036a3001a05544b/brick >Result: >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 149.642µs >[negroni] Completed 200 OK in 95.895µs >[kubeexec] DEBUG 2018/06/08 08:53:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2004 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5a17b5c320a4f4efb036a3001a05544b/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:53:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5a17b5c320a4f4efb036a3001a05544b/brick >Result: >[cmdexec] INFO 2018/06/08 08:53:55 Creating volume vol_bf31af76ef6c54e7e8f24f4d8711cb22 replica 3 >[kubeexec] ERROR 2018/06/08 08:53:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9f9fb17746d0aad637b132875d2744e5 force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9f9fb17746d0aad637b132875d2744e5: failed: Another transaction is in progress for vol_9f9fb17746d0aad637b132875d2744e5. Please try again after sometime. >] >[cmdexec] ERROR 2018/06/08 08:53:56 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9f9fb17746d0aad637b132875d2744e5: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_9f9fb17746d0aad637b132875d2744e5: failed: Another transaction is in progress for vol_9f9fb17746d0aad637b132875d2744e5. Please try again after sometime. 
>[kubeexec] DEBUG 2018/06/08 08:53:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ac396b39a38b350b3672e904f47f449e >Result: >[kubeexec] DEBUG 2018/06/08 08:53:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_ac396b39a38b350b3672e904f47f449e --virtualsize 2097152K --name brick_ac396b39a38b350b3672e904f47f449e >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_ac396b39a38b350b3672e904f47f449e" created. >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 240.494µs >[kubeexec] DEBUG 2018/06/08 08:53:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ac396b39a38b350b3672e904f47f449e >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ac396b39a38b350b3672e904f47f449e isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 177.518µs >[negroni] Completed 200 OK in 128.902µs >[kubeexec] DEBUG 2018/06/08 
08:53:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ac396b39a38b350b3672e904f47f449e /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ac396b39a38b350b3672e904f47f449e xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] ERROR 2018/06/08 08:53:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume create vol_b99532640a5201d243193159ee762ae4 replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129/brick] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout [Error : Request timed out >]: Stderr [] >[kubeexec] DEBUG 2018/06/08 08:53:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_ac396b39a38b350b3672e904f47f449e /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ac396b39a38b350b3672e904f47f449e >Result: >[kubeexec] DEBUG 2018/06/08 08:53:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ac396b39a38b350b3672e904f47f449e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:53:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc 
Command: chown :2005 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ac396b39a38b350b3672e904f47f449e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:53:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ac396b39a38b350b3672e904f47f449e/brick >Result: >[kubeexec] ERROR 2018/06/08 08:53:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9f9fb17746d0aad637b132875d2744e5] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9f9fb17746d0aad637b132875d2744e5: failed: Another transaction is in progress for vol_9f9fb17746d0aad637b132875d2744e5. Please try again after sometime. >] >[cmdexec] ERROR 2018/06/08 08:53:57 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9f9fb17746d0aad637b132875d2744e5: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_9f9fb17746d0aad637b132875d2744e5: failed: Another transaction is in progress for vol_9f9fb17746d0aad637b132875d2744e5. Please try again after sometime. 
>[heketi] ERROR 2018/06/08 08:53:57 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:177: Error executing create volume: Unable to execute command on glusterfs-storage-pg4xc: >[heketi] WARNING 2018/06/08 08:53:57 Create Volume Exec requested retry >[heketi] INFO 2018/06/08 08:53:57 Retry Create Volume (1) >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 180.286µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 170.771µs >[negroni] Completed 200 OK in 88.561µs >[kubeexec] ERROR 2018/06/08 08:53:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_15e0122e942fc41f80666a3714670682 force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_15e0122e942fc41f80666a3714670682: failed: Another transaction is in progress for vol_15e0122e942fc41f80666a3714670682. Please try again after sometime. >] >[cmdexec] ERROR 2018/06/08 08:53:58 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_15e0122e942fc41f80666a3714670682: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_15e0122e942fc41f80666a3714670682: failed: Another transaction is in progress for vol_15e0122e942fc41f80666a3714670682. Please try again after sometime. 
>[kubeexec] DEBUG 2018/06/08 08:53:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 28min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13533 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option 
*replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > ├─13534 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > ├─13541 /usr/bin/python /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post --volname=vol_9c85de66c12db0a72b4d16fe888ff74d > ├─13543 gluster system:: getwd > └─13545 /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --version > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 08:53:58 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:53:58 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:53:58 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:53:58 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 185.659µs >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 2.55109ms >[negroni] Completed 202 Accepted in 6m5.812084946s >[asynchttp] INFO 2018/06/08 08:53:58 asynchttp.go:288: Started job c2300af895526bf82435fd82aaea0ce8 >[heketi] INFO 2018/06/08 08:53:58 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:53:58 Creating brick 35ab4a65e06fcf26cf320c522f9afc92 >[heketi] INFO 2018/06/08 08:53:58 Creating brick 6f7534e6df0ad44d669269ef560a18ed >[heketi] INFO 
2018/06/08 08:53:58 Creating brick e0e48c633e08f573cc3b70b99f47faea >[heketi] INFO 2018/06/08 08:53:58 Allocating brick set #0 >[kubeexec] DEBUG 2018/06/08 08:53:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_615bca09219fdb7b02ec4e0a6a62d4f7 >Result: >[kubeexec] DEBUG 2018/06/08 08:53:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_35ab4a65e06fcf26cf320c522f9afc92 >Result: >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 9.444592ms >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 125.728µs >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 2.774778ms >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 117.279µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 97.881µs >[kubeexec] DEBUG 2018/06/08 08:53:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_615bca09219fdb7b02ec4e0a6a62d4f7 --virtualsize 2097152K --name brick_615bca09219fdb7b02ec4e0a6a62d4f7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_615bca09219fdb7b02ec4e0a6a62d4f7" created. 
>[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 1.50699ms >[kubeexec] DEBUG 2018/06/08 08:53:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_35ab4a65e06fcf26cf320c522f9afc92 --virtualsize 2097152K --name brick_35ab4a65e06fcf26cf320c522f9afc92 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_35ab4a65e06fcf26cf320c522f9afc92" created. >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_615bca09219fdb7b02ec4e0a6a62d4f7 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_615bca09219fdb7b02ec4e0a6a62d4f7 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_35ab4a65e06fcf26cf320c522f9afc92 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_35ab4a65e06fcf26cf320c522f9afc92 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 
blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_615bca09219fdb7b02ec4e0a6a62d4f7 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_615bca09219fdb7b02ec4e0a6a62d4f7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_35ab4a65e06fcf26cf320c522f9afc92 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_35ab4a65e06fcf26cf320c522f9afc92 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_615bca09219fdb7b02ec4e0a6a62d4f7 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_615bca09219fdb7b02ec4e0a6a62d4f7 >Result: >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_35ab4a65e06fcf26cf320c522f9afc92 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_35ab4a65e06fcf26cf320c522f9afc92 >Result: 
>[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_615bca09219fdb7b02ec4e0a6a62d4f7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_35ab4a65e06fcf26cf320c522f9afc92/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2005 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_615bca09219fdb7b02ec4e0a6a62d4f7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2006 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_35ab4a65e06fcf26cf320c522f9afc92/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_615bca09219fdb7b02ec4e0a6a62d4f7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:53:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_35ab4a65e06fcf26cf320c522f9afc92/brick >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 123.075µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 
>[negroni] Completed 200 OK in 116.861µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 59.127µs >[heketi] WARNING 2018/06/08 08:54:00 Create Volume Exec requested retry >[heketi] INFO 2018/06/08 08:54:00 Retry Create Volume (1) >[kubeexec] ERROR 2018/06/08 08:54:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_ad1e5849e9566f1bcaa09cfb9c0b96ef] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: failed: Locking failed on 10.70.47.76. Please check log file for details. >] >[cmdexec] ERROR 2018/06/08 08:54:00 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: failed: Locking failed on 10.70.47.76. Please check log file for details. 
>[heketi] ERROR 2018/06/08 08:54:00 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:177: Error executing create volume: Unable to execute command on glusterfs-storage-vsh2m: >[kubeexec] DEBUG 2018/06/08 08:54:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_079040d8947a2eda08d5ddbdc41bfa83 >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 125.764µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 110.675µs >[negroni] Completed 200 OK in 72.483µs >[kubeexec] DEBUG 2018/06/08 08:54:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_079040d8947a2eda08d5ddbdc41bfa83 --virtualsize 2097152K --name brick_079040d8947a2eda08d5ddbdc41bfa83 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_079040d8947a2eda08d5ddbdc41bfa83" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_079040d8947a2eda08d5ddbdc41bfa83 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_079040d8947a2eda08d5ddbdc41bfa83 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_079040d8947a2eda08d5ddbdc41bfa83 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_079040d8947a2eda08d5ddbdc41bfa83 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_079040d8947a2eda08d5ddbdc41bfa83 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_079040d8947a2eda08d5ddbdc41bfa83 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_079040d8947a2eda08d5ddbdc41bfa83/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:01 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2005 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_079040d8947a2eda08d5ddbdc41bfa83/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_079040d8947a2eda08d5ddbdc41bfa83/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:01 Creating volume vol_cc8a686464bb4017c91ba7294ee1b091 replica 3 >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 168.411µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 223.931µs >[negroni] Completed 200 OK in 121.67µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 125.467µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 122.242µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 301.545µs >[kubeexec] ERROR 2018/06/08 08:54:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_b99532640a5201d243193159ee762ae4 force] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_b99532640a5201d243193159ee762ae4: failed: Another transaction is in progress for vol_b99532640a5201d243193159ee762ae4. Please try again after sometime. 
>] >[cmdexec] ERROR 2018/06/08 08:54:03 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_b99532640a5201d243193159ee762ae4: Unable to execute command on glusterfs-storage-vsh2m: volume stop: vol_b99532640a5201d243193159ee762ae4: failed: Another transaction is in progress for vol_b99532640a5201d243193159ee762ae4. Please try again after sometime. >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 115.044µs >[kubeexec] DEBUG 2018/06/08 08:54:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_bf31af76ef6c54e7e8f24f4d8711cb22 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f4fd75475848eb3cf3c5f985495d64f3/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_5a17b5c320a4f4efb036a3001a05544b/brick >Result: volume create: vol_bf31af76ef6c54e7e8f24f4d8711cb22: success: please start the volume to access data >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 110.137µs >[negroni] Completed 200 OK in 75.373µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 118.054µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 106.669µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 67.57µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 159.813µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 212.159µs >[negroni] Started GET 
/queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 110.882µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 197.856µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 265.108µs >[negroni] Completed 200 OK in 302.648µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 122.731µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 121.474µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 200 OK in 93.195µs >[kubeexec] DEBUG 2018/06/08 08:54:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_9f9fb17746d0aad637b132875d2744e5 force >Result: volume stop: vol_9f9fb17746d0aad637b132875d2744e5: success >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 30min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 
--volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─15453 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > └─15454 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:54:08 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:54:08 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[negroni] Completed 202 Accepted in 6m15.226372505s >[asynchttp] INFO 2018/06/08 08:54:08 asynchttp.go:288: Started job 252b1a3974a7d2a66ea36f37f2c82074 >[heketi] INFO 2018/06/08 08:54:08 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:08 Creating brick 063b0d12615d3375614df036642dfa39 >[heketi] INFO 2018/06/08 08:54:08 Creating brick 6bfb8b3bea31f51d5be26acc7e81ce9a >[heketi] INFO 2018/06/08 08:54:08 Creating brick 4dc091ed7622300b60597d7a49fae798 >[negroni] Completed 202 Accepted in 6m2.241897404s >[asynchttp] INFO 2018/06/08 08:54:08 asynchttp.go:288: Started job 20e932645b6c4b3853e7a46917e686e7 >[heketi] INFO 2018/06/08 08:54:08 Started async operation: Delete Volume >[asynchttp] INFO 2018/06/08 08:54:08 asynchttp.go:292: Completed job b7642ac5823fb25c80d83504800b9c18 in 8m2.242006294s >[heketi] ERROR 2018/06/08 08:54:08 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to determine snapshot information from volume vol_9c85de66c12db0a72b4d16fe888ff74d: EOF >[heketi] INFO 2018/06/08 08:54:08 Allocating brick set #0 >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 30min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─15640 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > └─15641 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:54:08 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:54:08 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_6f7534e6df0ad44d669269ef560a18ed >Result: >[negroni] Completed 202 Accepted in 5m32.357531985s >[asynchttp] INFO 2018/06/08 08:54:08 asynchttp.go:288: Started job d5734675939ff78001cf479cb6e9b97c >[heketi] INFO 2018/06/08 08:54:08 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:08 Creating brick cb9dab914105ca3d691abed1d53d7df9 >[heketi] INFO 2018/06/08 08:54:08 Creating brick cd0ffc6bd88bdb1d969dceaafda517d8 >[heketi] INFO 2018/06/08 08:54:08 Creating brick 47b16a0d2bab878899e523ab33a0c258 >[heketi] INFO 2018/06/08 08:54:08 Allocating brick set #0 >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_063b0d12615d3375614df036642dfa39 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_6f7534e6df0ad44d669269ef560a18ed --virtualsize 2097152K --name brick_6f7534e6df0ad44d669269ef560a18ed >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_6f7534e6df0ad44d669269ef560a18ed" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_6f7534e6df0ad44d669269ef560a18ed >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_6f7534e6df0ad44d669269ef560a18ed isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_063b0d12615d3375614df036642dfa39 --virtualsize 2097152K --name brick_063b0d12615d3375614df036642dfa39 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_063b0d12615d3375614df036642dfa39" created. 
>[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 148.067µs >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_6f7534e6df0ad44d669269ef560a18ed /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_6f7534e6df0ad44d669269ef560a18ed xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 182.59µs >[negroni] Started GET /queue/b7642ac5823fb25c80d83504800b9c18 >[negroni] Completed 500 Internal Server Error in 129.978µs >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_063b0d12615d3375614df036642dfa39 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_063b0d12615d3375614df036642dfa39 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_063b0d12615d3375614df036642dfa39 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_063b0d12615d3375614df036642dfa39 xfs rw,inode64,noatime,nouuid 1 2\" >> 
\"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_6f7534e6df0ad44d669269ef560a18ed /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_6f7534e6df0ad44d669269ef560a18ed >Result: >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_6f7534e6df0ad44d669269ef560a18ed/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_063b0d12615d3375614df036642dfa39 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_063b0d12615d3375614df036642dfa39 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2006 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_6f7534e6df0ad44d669269ef560a18ed/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_063b0d12615d3375614df036642dfa39/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_6f7534e6df0ad44d669269ef560a18ed/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2007 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_063b0d12615d3375614df036642dfa39/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_063b0d12615d3375614df036642dfa39/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[kubeexec] DEBUG 2018/06/08 08:54:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_47b16a0d2bab878899e523ab33a0c258 >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 100.983µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 118.199µs >[kubeexec] DEBUG 2018/06/08 08:54:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin 
vg_3a4297677881963e3f80124971d50eea/tp_47b16a0d2bab878899e523ab33a0c258 --virtualsize 2097152K --name brick_47b16a0d2bab878899e523ab33a0c258 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_47b16a0d2bab878899e523ab33a0c258" created. >[kubeexec] DEBUG 2018/06/08 08:54:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_47b16a0d2bab878899e523ab33a0c258 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_47b16a0d2bab878899e523ab33a0c258 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_47b16a0d2bab878899e523ab33a0c258 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_47b16a0d2bab878899e523ab33a0c258 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_47b16a0d2bab878899e523ab33a0c258 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_47b16a0d2bab878899e523ab33a0c258 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:10 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_47b16a0d2bab878899e523ab33a0c258/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_47b16a0d2bab878899e523ab33a0c258/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_47b16a0d2bab878899e523ab33a0c258/brick >Result: >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 131.883µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 106.242µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 130.844µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 184.878µs >[kubeexec] DEBUG 2018/06/08 08:54:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_bf31af76ef6c54e7e8f24f4d8711cb22 >Result: volume start: vol_bf31af76ef6c54e7e8f24f4d8711cb22: success >[kubeexec] DEBUG 2018/06/08 08:54:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_cc8a686464bb4017c91ba7294ee1b091 replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_079040d8947a2eda08d5ddbdc41bfa83/brick 
10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_ac396b39a38b350b3672e904f47f449e/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_615bca09219fdb7b02ec4e0a6a62d4f7/brick >Result: volume create: vol_cc8a686464bb4017c91ba7294ee1b091: success: please start the volume to access data >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 139.314µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 252.798µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 142.751µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 135.588µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 173.966µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 136.43µs >[kubeexec] DEBUG 2018/06/08 08:54:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force >Result: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: success >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 148.256µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 155.035µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 140.326µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 155.409µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 146.566µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 192.778µs >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] 
Completed 200 OK in 148.335µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 202.201µs >[kubeexec] DEBUG 2018/06/08 08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume delete vol_15e0122e942fc41f80666a3714670682 >Result: volume delete: vol_15e0122e942fc41f80666a3714670682: success >[heketi] WARNING 2018/06/08 08:54:19 Create Volume Exec requested retry >[heketi] INFO 2018/06/08 08:54:19 Retry Create Volume (1) >[heketi] ERROR 2018/06/08 08:54:19 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:177: Error executing create volume: Unable to execute command on glusterfs-storage-gxp7c: >[kubeexec] DEBUG 2018/06/08 08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_cc8a686464bb4017c91ba7294ee1b091 >Result: volume start: vol_cc8a686464bb4017c91ba7294ee1b091: success >[kubeexec] DEBUG 2018/06/08 08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc >Result: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: success >[heketi] INFO 2018/06/08 08:54:19 Deleting brick 214eb9006f9103530a1d0310d5f5dcfc >[heketi] INFO 2018/06/08 08:54:19 Deleting brick 4f4d753d298c99eac492c32006c74484 >[heketi] INFO 2018/06/08 08:54:19 Deleting brick f05028077b974ebc1f6621aee2184169 >[kubeexec] DEBUG 2018/06/08 08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e0e48c633e08f573cc3b70b99f47faea >Result: >[kubeexec] DEBUG 2018/06/08 
08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_214eb9006f9103530a1d0310d5f5dcfc >[heketi] ERROR 2018/06/08 08:54:19 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:177: Error executing create volume: Unable to execute command on glusterfs-storage-vsh2m: >[kubeexec] DEBUG 2018/06/08 08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_b99532640a5201d243193159ee762ae4 >Result: volume delete: vol_b99532640a5201d243193159ee762ae4: success >[heketi] WARNING 2018/06/08 08:54:19 Create Volume Exec requested retry >[heketi] INFO 2018/06/08 08:54:19 Retry Create Volume (1) >[kubeexec] DEBUG 2018/06/08 08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_214eb9006f9103530a1d0310d5f5dcfc > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_214eb9006f9103530a1d0310d5f5dcfc >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 172.747µs >[kubeexec] DEBUG 2018/06/08 08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc >Result: >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 117.711µs >[kubeexec] DEBUG 2018/06/08 08:54:19 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_9f9fb17746d0aad637b132875d2744e5 >Result: volume delete: vol_9f9fb17746d0aad637b132875d2744e5: success >[heketi] INFO 2018/06/08 08:54:19 Deleting brick cc2483d7b49bc029b5200a024cac7535 >[heketi] INFO 2018/06/08 08:54:19 Deleting brick 2ed96343627983cf9667b7cee4052d17 >[heketi] INFO 2018/06/08 08:54:19 Deleting brick 1bb49691eccf328025855a91ee8cbc66 >[kubeexec] DEBUG 2018/06/08 08:54:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_e0e48c633e08f573cc3b70b99f47faea --virtualsize 2097152K --name brick_e0e48c633e08f573cc3b70b99f47faea >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e0e48c633e08f573cc3b70b99f47faea" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e0e48c633e08f573cc3b70b99f47faea >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e0e48c633e08f573cc3b70b99f47faea isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e0e48c633e08f573cc3b70b99f47faea /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e0e48c633e08f573cc3b70b99f47faea xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e0e48c633e08f573cc3b70b99f47faea /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e0e48c633e08f573cc3b70b99f47faea >Result: >[kubeexec] DEBUG 2018/06/08 08:54:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_214eb9006f9103530a1d0310d5f5dcfc/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:54:20 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e0e48c633e08f573cc3b70b99f47faea/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2006 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e0e48c633e08f573cc3b70b99f47faea/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e0e48c633e08f573cc3b70b99f47faea/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:20 Creating volume vol_2bf097c60bd8b38bfcb4327727ca5681 replica 3 >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 115.086µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 152.277µs >[kubeexec] DEBUG 2018/06/08 08:54:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_cc2483d7b49bc029b5200a024cac7535 >[kubeexec] DEBUG 2018/06/08 08:54:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4dc091ed7622300b60597d7a49fae798 >Result: >[negroni] Started GET 
/queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 158.285µs >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 109.615µs >[kubeexec] DEBUG 2018/06/08 08:54:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_214eb9006f9103530a1d0310d5f5dcfc > >Result: Logical volume "brick_214eb9006f9103530a1d0310d5f5dcfc" successfully removed >[kubeexec] DEBUG 2018/06/08 08:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_4dc091ed7622300b60597d7a49fae798 --virtualsize 2097152K --name brick_4dc091ed7622300b60597d7a49fae798 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_4dc091ed7622300b60597d7a49fae798" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4dc091ed7622300b60597d7a49fae798 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4dc091ed7622300b60597d7a49fae798 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4dc091ed7622300b60597d7a49fae798 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4dc091ed7622300b60597d7a49fae798 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4dc091ed7622300b60597d7a49fae798 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4dc091ed7622300b60597d7a49fae798 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4dc091ed7622300b60597d7a49fae798/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:22 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2007 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4dc091ed7622300b60597d7a49fae798/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4dc091ed7622300b60597d7a49fae798/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 29min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─13858 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 08:54:22 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:54:22 Cleaned 0 nodes from health cache >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 200 OK in 131.894µs >[negroni] Completed 202 Accepted in 5m46.911880344s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job 086aebe6792ba08c072791f3d9745777 >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:22 Creating brick c4423da2a93ab3c5917479262cf1d93a >[heketi] INFO 2018/06/08 08:54:22 Creating brick 91d54591c1b07bc3a93d925cfbbd0f23 >[heketi] INFO 2018/06/08 08:54:22 Creating brick e3ebda9db9a085114fe9732ab2df4869 >[heketi] INFO 2018/06/08 08:54:22 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m46.981032141s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job 12c7a0bf9d0844701651eb57cc4883ef >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:22 Creating brick aec48b6cbce6be7256a2b86efac6aef6 >[heketi] INFO 2018/06/08 08:54:22 Creating brick abd8495bfd886d541a449967f7bf70c0 >[heketi] INFO 2018/06/08 08:54:22 Creating brick 26379a9db79930ad00a37453b492ebac >[heketi] INFO 2018/06/08 08:54:22 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m47.004154685s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job 7e56f206e49c37c88b54444c74b73010 >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:22 Creating brick 24c8f78999c94f4eacb087ac1fc563d1 >[heketi] INFO 2018/06/08 08:54:22 Creating brick c77746b3040810cf2ae10fc585c67fd5 >[heketi] INFO 2018/06/08 08:54:22 
Creating brick 6c8f55013bee289cd516ef97746721ff >[heketi] INFO 2018/06/08 08:54:22 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m47.023976505s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job 963dc2b509a04cdd5db1a8531a35c01c >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 200 OK in 87.904µs >[heketi] INFO 2018/06/08 08:54:22 Creating brick cdc8c0d24f68026f06c17385d2fd0029 >[heketi] INFO 2018/06/08 08:54:22 Creating brick 9be3d8f4748b4c09541f8a9de90c4c34 >[heketi] INFO 2018/06/08 08:54:22 Creating brick cecca27e94262fbaae81106a8cfeb0b2 >[heketi] INFO 2018/06/08 08:54:22 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m47.043182703s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job d4f0d63988cfe91663765df353eac613 >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:22 Creating brick d985a824b73813069566b0a71bb73269 >[heketi] INFO 2018/06/08 08:54:22 Creating brick dddf81e8d43f425cc26134166b21f15e >[heketi] INFO 2018/06/08 08:54:22 Creating brick ac27bcfa769ceba759b678d9e6b751cb >[heketi] INFO 2018/06/08 08:54:22 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m47.062216544s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job 0f488f564115ab81bd476de5a18002e4 >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[heketi] ERROR 2018/06/08 08:54:22 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:399: Pending brick 214eb9006f9103530a1d0310d5f5dcfc can not be deleted >[heketi] INFO 2018/06/08 08:54:22 Creating brick d8a382627ef730cbe0664d9439fd6a59 >[heketi] INFO 2018/06/08 08:54:22 Creating brick bd4fcfec1db80add56b57e18bd681378 >[heketi] ERROR 2018/06/08 08:54:22 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Delete Volume Build Failed: The target exists, contains 
other items, or is in use. >[heketi] INFO 2018/06/08 08:54:22 Creating brick 0a07a24688dfe13d4e1726b8d05e24a7 >[negroni] Completed 500 Internal Server Error in 5m32.077688655s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:292: Completed job 548c6fbe203bb93f9a4ecb56f976926d in 7m32.07842258s >[heketi] ERROR 2018/06/08 08:54:22 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_a4a6e4892da299f6c5634b8a2def697e: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: failed: Another transaction is in progress for vol_a4a6e4892da299f6c5634b8a2def697e. Please try again after sometime. >[heketi] INFO 2018/06/08 08:54:22 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m2.105699533s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job 44bb2a0e030ba450c0c64afe4b039c77 >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:22 Creating brick 8d39164e3ebba9d10c8b6ed33dafabc7 >[heketi] INFO 2018/06/08 08:54:22 Creating brick c740e50db586c869e27205aee2820dd7 >[heketi] INFO 2018/06/08 08:54:22 Creating brick f88d16c3bc0200c12aa9db817d4d9f0a >[heketi] INFO 2018/06/08 08:54:22 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m2.127609089s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job addeaaa11160c5240019710f9906a045 >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:22 Creating brick b35881161d7bf25bfa4af1bea4746d58 >[heketi] INFO 2018/06/08 08:54:22 Creating brick 4fd98fc9e1c8a606b4b5c70bb555597e >[heketi] INFO 2018/06/08 08:54:22 Creating brick a84d4290cbebd984dd11b88eb806d93b >[heketi] INFO 2018/06/08 08:54:22 Allocating brick set #0 >[kubeexec] DEBUG 2018/06/08 08:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_cd0ffc6bd88bdb1d969dceaafda517d8 >Result: >[negroni] Completed 202 Accepted in 5m2.152129648s >[asynchttp] INFO 2018/06/08 08:54:22 asynchttp.go:288: Started job e437810dbd1e27db60605a0a68708d27 >[heketi] INFO 2018/06/08 08:54:22 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:22 Creating brick 2bba219914cb29c5994021b1731b7763 >[heketi] INFO 2018/06/08 08:54:22 Creating brick bf4ebc8a0c81383f3dc357f04e4903c1 >[heketi] INFO 2018/06/08 08:54:22 Creating brick 8546c963be0cfb030d57435e94a50d61 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m2.182011023s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 3addf5576b966f87a33743f58885b381 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick 6167bec47d9258cf5b2a5e3a48c0d391 >[heketi] INFO 2018/06/08 08:54:23 Creating brick 6319c734fbf95bbbfb1156b24dcc43ea >[heketi] INFO 2018/06/08 08:54:23 Creating brick b2827681c0e1655380b4d5d53226bf7e >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m2.21540069s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 9a79c05867577fc3b0a49e88846f7d5d >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick 9963c5d95158f58c8fc2f28d81d925ea >[heketi] INFO 2018/06/08 08:54:23 Creating brick 322eeb9ae02c4f9e338567566394d2c7 >[heketi] INFO 2018/06/08 08:54:23 Creating brick a1150dfd2cad12bad69ef9cc830da83d >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m2.247836708s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 07364f7e79063d4523983c360e41ce2a >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 
2018/06/08 08:54:23 Creating brick 211fca2005300a58cf3b464b89e9deea >[heketi] INFO 2018/06/08 08:54:23 Creating brick 4d4160003ee228514a59c2c61932ceed >[heketi] INFO 2018/06/08 08:54:23 Creating brick 3ca7a79bfe1ee21131ef20732eac93d7 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 5m2.283145242s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job be6e7eb7be5d4288a84770ab21a780d4 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:399: Pending brick 214eb9006f9103530a1d0310d5f5dcfc can not be deleted >[heketi] INFO 2018/06/08 08:54:23 Creating brick 30eb22cad2c3536d381b8d71943cffd2 >[heketi] INFO 2018/06/08 08:54:23 Creating brick df153caad54b6511504d5af0cdc4e2b5 >[heketi] INFO 2018/06/08 08:54:23 Creating brick 54258f15bd3df0dfb8421196bdea9774 >[negroni] Completed 500 Internal Server Error in 4m47.301233838s >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Delete Volume Build Failed: The target exists, contains other items, or is in use. 
>[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 4m29.35565232s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job cba4a2a4640269f19e7dad88f97689c6 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick 02dbc9628a1cd8a5f20d9fea7a615dcc >[heketi] INFO 2018/06/08 08:54:23 Creating brick a69c00298f148d2208ea7eab0903a3d2 >[heketi] INFO 2018/06/08 08:54:23 Creating brick e1709c1db8499b885817e5f4682c56ca >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 4m29.176267251s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job feaab31cecfc86532b5845f95cf63913 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick b4e462e2cc3dbccfd86e44f907dc7f00 >[heketi] INFO 2018/06/08 08:54:23 Creating brick 2a97d96e94e0ff7c49d2b6d81dfbd8fd >[heketi] INFO 2018/06/08 08:54:23 Creating brick dde70bf2e86fac7df0e0427af9bf5db3 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 4m28.969538535s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 85bc22f147332f528f993604639f4b29 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick d1e261647e36a4dd323508df5b3decff >[heketi] INFO 2018/06/08 08:54:23 Creating brick da658ca7b4961a716b886b57a67ff000 >[heketi] INFO 2018/06/08 08:54:23 Creating brick 906b174d0f419aa3d1a9affe4675674a >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_966a92ed3e4374a5e634f7b133c49e52: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: failed: Another transaction is in progress for vol_966a92ed3e4374a5e634f7b133c49e52. 
Please try again after sometime. >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:292: Completed job 9abffd27d4b48c8f2f0e699632a69408 in 8m17.439460258s >[kubeexec] DEBUG 2018/06/08 08:54:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_cd0ffc6bd88bdb1d969dceaafda517d8 --virtualsize 2097152K --name brick_cd0ffc6bd88bdb1d969dceaafda517d8 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_cd0ffc6bd88bdb1d969dceaafda517d8" created. >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 4m28.766518796s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job b6fa7484737d1ad70ee715a59b9488ee >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick 0a39774f01ac7959661ff5afff2595ab >[heketi] INFO 2018/06/08 08:54:23 Creating brick 592a78c302b01c3ee9538269481405e7 >[heketi] INFO 2018/06/08 08:54:23 Creating brick 11a59fea08d9b732822c2669ddaf54fa >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 4m28.613555187s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 2d92a9e5a1719ab40f22667eb62b0d09 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick b3918aae89c9aab7c5cfa3496f95936a >[heketi] INFO 2018/06/08 08:54:23 Creating brick dba8827a8c1b642d1b34bc5cf35aa4b4 >[heketi] INFO 2018/06/08 08:54:23 Creating brick c8dd6b39719a6bc75ea331fae5a92396 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 4m28.4586346s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 
5a4d86d3903a860d822a06fd74343b52 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick 9f2b3abdf7a755d6b95c412c440a955f >[heketi] INFO 2018/06/08 08:54:23 Creating brick 01b4fda3de4a6a9df5de0de9826ad1aa >[heketi] INFO 2018/06/08 08:54:23 Creating brick e54cabdc56fef4e5b4a11c7a72eaff3d >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 4m28.278139136s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job edf46a25868eb4962cc099193bc77737 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[kubeexec] DEBUG 2018/06/08 08:54:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_cd0ffc6bd88bdb1d969dceaafda517d8 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_cd0ffc6bd88bdb1d969dceaafda517d8 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[heketi] INFO 2018/06/08 08:54:23 Creating brick e2e58ed3bab4af0c07b035a7306264ab >[heketi] INFO 2018/06/08 08:54:23 Creating brick f399afe93703292ed5ac22602d678d6f >[heketi] INFO 2018/06/08 08:54:23 Creating brick 68704fc0eb854fe2a0157b0261302792 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 4m26.777086971s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 01bc6bf544472a829302d04d4df8b0d1 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] ERROR 2018/06/08 08:54:23 
/src/github.com/heketi/heketi/apps/glusterfs/operations.go:399: Pending brick 214eb9006f9103530a1d0310d5f5dcfc can not be deleted >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Delete Volume Build Failed: The target exists, contains other items, or is in use. >[heketi] INFO 2018/06/08 08:54:23 Creating brick d105708cfed6c89ba77c7f9738020bf4 >[heketi] INFO 2018/06/08 08:54:23 Creating brick ad542929ce4fd8719fbe5fc44df98dbd >[heketi] INFO 2018/06/08 08:54:23 Creating brick 4f44e68452214ab813c4616ad12e2ee2 >[negroni] Completed 500 Internal Server Error in 4m2.718155106s >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #3 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #3 >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Create Volume Build Failed: No space >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #4 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #5 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #6 >[negroni] Completed 500 Internal Server Error in 3m55.963669042s >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:54:23 Allocating 
brick set #2 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #3 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #1 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #2 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #3 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #4 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #5 >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #6 >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Create Volume Build Failed: No space >[negroni] Completed 500 Internal Server Error in 3m25.837382141s >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:399: Pending brick 214eb9006f9103530a1d0310d5f5dcfc can not be deleted >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Delete Volume Build Failed: The target exists, contains other items, or is in use. 
>[negroni] Completed 500 Internal Server Error in 3m17.769667715s >[kubeexec] DEBUG 2018/06/08 08:54:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_cd0ffc6bd88bdb1d969dceaafda517d8 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_cd0ffc6bd88bdb1d969dceaafda517d8 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 2m54.9643561s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job a85d28ab7c8834b5c41247d658312b8f >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] INFO 2018/06/08 08:54:23 Creating brick fcac0ae8bd9895d4780d78b18d3d6c38 >[heketi] INFO 2018/06/08 08:54:23 Creating brick c36c6529173a51e7b9ae7a98545d98a2 >[heketi] INFO 2018/06/08 08:54:23 Creating brick 1039ed81d7d98ea183e9c7d8d00c1b6d >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 2m24.28656674s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 1b65bda5683c59bd435d17b7a832a860 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:399: Pending brick 214eb9006f9103530a1d0310d5f5dcfc can not be deleted >[heketi] INFO 2018/06/08 08:54:23 Creating brick e1c7bf2add37671886c54c55f98f9fb7 >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Delete Volume Build Failed: The target exists, contains other items, or is in use. 
>[negroni] Completed 500 Internal Server Error in 2m17.823202408s >[heketi] INFO 2018/06/08 08:54:23 Creating brick f26cbb3df4880b3fe991ee4e44697c2c >[heketi] INFO 2018/06/08 08:54:23 Creating brick 1f46651d33f6ef49271d6d5382a7bc9c >[heketi] WARNING 2018/06/08 08:54:23 Unable to delete cluster [0a73c60efdd4673113b668afea101e6d] because it contains volumes and/or nodes >[negroni] Completed 409 Conflict in 1m53.881721054s >[heketi] INFO 2018/06/08 08:54:23 Allocating brick set #0 >[negroni] Completed 202 Accepted in 1m22.896263777s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job fe707ace9f825f3ad19b3b8f28410ae4 >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Create Volume >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:399: Pending brick 214eb9006f9103530a1d0310d5f5dcfc can not be deleted >[heketi] ERROR 2018/06/08 08:54:23 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Delete Volume Build Failed: The target exists, contains other items, or is in use. 
>[negroni] Completed 500 Internal Server Error in 1m2.857919329s >[heketi] INFO 2018/06/08 08:54:23 Creating brick e631f0bf543f6c06867077cd16aad9e2 >[heketi] INFO 2018/06/08 08:54:23 Creating brick 04daa5c9d25bc1a3074533508d73b587 >[heketi] INFO 2018/06/08 08:54:23 Creating brick 61815093958a51df17cf62e0a12a5451 >[heketi] INFO 2018/06/08 08:54:23 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:292: Completed job d3134ebfac4ef9bd8dbf824bd7bf873a in 2m29.176160341s >[heketi] INFO 2018/06/08 08:54:23 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:292: Completed job 53ae2da32657329fa5a30bfda1414335 in 28.933993184s >[negroni] Started GET /queue/548c6fbe203bb93f9a4ecb56f976926d >[negroni] Completed 500 Internal Server Error in 105.1µs >[negroni] Completed 202 Accepted in 2.892508234s >[asynchttp] INFO 2018/06/08 08:54:23 asynchttp.go:288: Started job 35a61a66b97e9b1b5514e032177179ad >[heketi] INFO 2018/06/08 08:54:23 Started async operation: Delete Volume >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 85.236µs >[kubeexec] DEBUG 2018/06/08 08:54:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_cd0ffc6bd88bdb1d969dceaafda517d8 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_cd0ffc6bd88bdb1d969dceaafda517d8 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_cd0ffc6bd88bdb1d969dceaafda517d8/brick >Result: >[negroni] Started GET /queue/9abffd27d4b48c8f2f0e699632a69408 >[negroni] Completed 500 Internal Server Error in 92.158µs >[kubeexec] DEBUG 2018/06/08 08:54:23 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_cd0ffc6bd88bdb1d969dceaafda517d8/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_cd0ffc6bd88bdb1d969dceaafda517d8/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4f4d753d298c99eac492c32006c74484 >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17 | cut -d" " -f1 >Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_2ed96343627983cf9667b7cee4052d17 >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e3ebda9db9a085114fe9732ab2df4869 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin 
vg_9394bc70699b006c5460c9f654cf345f/tp_e3ebda9db9a085114fe9732ab2df4869 --virtualsize 2097152K --name brick_e3ebda9db9a085114fe9732ab2df4869 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e3ebda9db9a085114fe9732ab2df4869" created. >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e3ebda9db9a085114fe9732ab2df4869 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e3ebda9db9a085114fe9732ab2df4869 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 110.439µs >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e3ebda9db9a085114fe9732ab2df4869 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e3ebda9db9a085114fe9732ab2df4869 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_ad1e5849e9566f1bcaa09cfb9c0b96ef force >Result: volume stop: vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: success >[kubeexec] DEBUG 
2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_2bf097c60bd8b38bfcb4327727ca5681 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_35ab4a65e06fcf26cf320c522f9afc92/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e0e48c633e08f573cc3b70b99f47faea/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_6f7534e6df0ad44d669269ef560a18ed/brick >Result: volume create: vol_2bf097c60bd8b38bfcb4327727ca5681: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e3ebda9db9a085114fe9732ab2df4869 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e3ebda9db9a085114fe9732ab2df4869 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6bfb8b3bea31f51d5be26acc7e81ce9a >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e3ebda9db9a085114fe9732ab2df4869/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2002 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e3ebda9db9a085114fe9732ab2df4869/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e3ebda9db9a085114fe9732ab2df4869/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_6bfb8b3bea31f51d5be26acc7e81ce9a --virtualsize 2097152K --name brick_6bfb8b3bea31f51d5be26acc7e81ce9a >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_6bfb8b3bea31f51d5be26acc7e81ce9a" created. >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_abd8495bfd886d541a449967f7bf70c0 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6bfb8b3bea31f51d5be26acc7e81ce9a >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6bfb8b3bea31f51d5be26acc7e81ce9a isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 
blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6bfb8b3bea31f51d5be26acc7e81ce9a /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6bfb8b3bea31f51d5be26acc7e81ce9a xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_abd8495bfd886d541a449967f7bf70c0 --virtualsize 2097152K --name brick_abd8495bfd886d541a449967f7bf70c0 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_abd8495bfd886d541a449967f7bf70c0" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6bfb8b3bea31f51d5be26acc7e81ce9a /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6bfb8b3bea31f51d5be26acc7e81ce9a >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6bfb8b3bea31f51d5be26acc7e81ce9a/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_abd8495bfd886d541a449967f7bf70c0 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_abd8495bfd886d541a449967f7bf70c0 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 236.457µs >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2007 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6bfb8b3bea31f51d5be26acc7e81ce9a/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_abd8495bfd886d541a449967f7bf70c0 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_abd8495bfd886d541a449967f7bf70c0 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6bfb8b3bea31f51d5be26acc7e81ce9a/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:25 Creating volume vol_07ef5105131fa51c35a9007ee213ea7a replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_abd8495bfd886d541a449967f7bf70c0 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_abd8495bfd886d541a449967f7bf70c0 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cb9dab914105ca3d691abed1d53d7df9 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_abd8495bfd886d541a449967f7bf70c0/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-gxp7c Command: chown :2003 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_abd8495bfd886d541a449967f7bf70c0/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_abd8495bfd886d541a449967f7bf70c0/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_cb9dab914105ca3d691abed1d53d7df9 --virtualsize 2097152K --name brick_cb9dab914105ca3d691abed1d53d7df9 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_cb9dab914105ca3d691abed1d53d7df9" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_24c8f78999c94f4eacb087ac1fc563d1 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_cb9dab914105ca3d691abed1d53d7df9 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_cb9dab914105ca3d691abed1d53d7df9 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_cb9dab914105ca3d691abed1d53d7df9 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cb9dab914105ca3d691abed1d53d7df9 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_cb9dab914105ca3d691abed1d53d7df9 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cb9dab914105ca3d691abed1d53d7df9 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_24c8f78999c94f4eacb087ac1fc563d1 --virtualsize 2097152K --name brick_24c8f78999c94f4eacb087ac1fc563d1 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_24c8f78999c94f4eacb087ac1fc563d1" created. >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cb9dab914105ca3d691abed1d53d7df9/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_24c8f78999c94f4eacb087ac1fc563d1 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_24c8f78999c94f4eacb087ac1fc563d1 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 107.453µs >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cb9dab914105ca3d691abed1d53d7df9/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_24c8f78999c94f4eacb087ac1fc563d1 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_24c8f78999c94f4eacb087ac1fc563d1 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cb9dab914105ca3d691abed1d53d7df9/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:26 Creating volume vol_a6be87754541710c38b420381c76fb8c replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169 | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_f05028077b974ebc1f6621aee2184169 >[kubeexec] DEBUG 2018/06/08 08:54:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_24c8f78999c94f4eacb087ac1fc563d1 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_24c8f78999c94f4eacb087ac1fc563d1 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1bb49691eccf328025855a91ee8cbc66 >[kubeexec] DEBUG 2018/06/08 08:54:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_24c8f78999c94f4eacb087ac1fc563d1/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2004 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_24c8f78999c94f4eacb087ac1fc563d1/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_24c8f78999c94f4eacb087ac1fc563d1/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_9be3d8f4748b4c09541f8a9de90c4c34 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_9be3d8f4748b4c09541f8a9de90c4c34 --virtualsize 2097152K --name brick_9be3d8f4748b4c09541f8a9de90c4c34 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_9be3d8f4748b4c09541f8a9de90c4c34" created. 
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 122.633µs >[kubeexec] DEBUG 2018/06/08 08:54:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_9be3d8f4748b4c09541f8a9de90c4c34 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_9be3d8f4748b4c09541f8a9de90c4c34 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_9be3d8f4748b4c09541f8a9de90c4c34 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_9be3d8f4748b4c09541f8a9de90c4c34 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_9be3d8f4748b4c09541f8a9de90c4c34 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_9be3d8f4748b4c09541f8a9de90c4c34 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_9be3d8f4748b4c09541f8a9de90c4c34/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2005 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_9be3d8f4748b4c09541f8a9de90c4c34/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_9be3d8f4748b4c09541f8a9de90c4c34/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ac27bcfa769ceba759b678d9e6b751cb >Result: >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_ac27bcfa769ceba759b678d9e6b751cb --virtualsize 2097152K --name brick_ac27bcfa769ceba759b678d9e6b751cb >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_ac27bcfa769ceba759b678d9e6b751cb" created. 
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 140.129µs >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_ac27bcfa769ceba759b678d9e6b751cb >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_ac27bcfa769ceba759b678d9e6b751cb isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_ac27bcfa769ceba759b678d9e6b751cb /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ac27bcfa769ceba759b678d9e6b751cb xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_ac27bcfa769ceba759b678d9e6b751cb /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ac27bcfa769ceba759b678d9e6b751cb >Result: >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ac27bcfa769ceba759b678d9e6b751cb/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_2bf097c60bd8b38bfcb4327727ca5681 >Result: volume start: vol_2bf097c60bd8b38bfcb4327727ca5681: success >[kubeexec] ERROR 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_b99532640a5201d243193159ee762ae4 force] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_b99532640a5201d243193159ee762ae4: failed: Volume vol_b99532640a5201d243193159ee762ae4 does not exist >] >[cmdexec] ERROR 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_b99532640a5201d243193159ee762ae4: Unable to execute command on glusterfs-storage-vsh2m: volume stop: vol_b99532640a5201d243193159ee762ae4: failed: Volume vol_b99532640a5201d243193159ee762ae4 does not exist >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2007 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ac27bcfa769ceba759b678d9e6b751cb/brick >Result: >[heketi] INFO 2018/06/08 08:54:29 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:54:29 asynchttp.go:292: Completed job c2300af895526bf82435fd82aaea0ce8 in 30.638297718s >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ac27bcfa769ceba759b678d9e6b751cb/brick >Result: >[kubeexec] DEBUG 
2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c4423da2a93ab3c5917479262cf1d93a >Result: >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_cc2483d7b49bc029b5200a024cac7535 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_cc2483d7b49bc029b5200a024cac7535 >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_d8a382627ef730cbe0664d9439fd6a59 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_214eb9006f9103530a1d0310d5f5dcfc > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_c4423da2a93ab3c5917479262cf1d93a --virtualsize 2097152K --name brick_c4423da2a93ab3c5917479262cf1d93a >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_c4423da2a93ab3c5917479262cf1d93a" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_d8a382627ef730cbe0664d9439fd6a59 --virtualsize 2097152K --name brick_d8a382627ef730cbe0664d9439fd6a59 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_d8a382627ef730cbe0664d9439fd6a59" created. >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c4423da2a93ab3c5917479262cf1d93a >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c4423da2a93ab3c5917479262cf1d93a isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 103.993µs >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c4423da2a93ab3c5917479262cf1d93a /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c4423da2a93ab3c5917479262cf1d93a xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:29 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_d8a382627ef730cbe0664d9439fd6a59 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_d8a382627ef730cbe0664d9439fd6a59 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_d8a382627ef730cbe0664d9439fd6a59 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_d8a382627ef730cbe0664d9439fd6a59 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c4423da2a93ab3c5917479262cf1d93a /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c4423da2a93ab3c5917479262cf1d93a >Result: >[kubeexec] ERROR 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_15e0122e942fc41f80666a3714670682 force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_15e0122e942fc41f80666a3714670682: failed: Volume 
vol_15e0122e942fc41f80666a3714670682 does not exist
>]
>[cmdexec] ERROR 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_15e0122e942fc41f80666a3714670682: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_15e0122e942fc41f80666a3714670682: failed: Volume vol_15e0122e942fc41f80666a3714670682 does not exist
>[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c4423da2a93ab3c5917479262cf1d93a/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_d8a382627ef730cbe0664d9439fd6a59 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_d8a382627ef730cbe0664d9439fd6a59
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2002 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c4423da2a93ab3c5917479262cf1d93a/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_d8a382627ef730cbe0664d9439fd6a59/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c4423da2a93ab3c5917479262cf1d93a/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2006 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_d8a382627ef730cbe0664d9439fd6a59/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_91d54591c1b07bc3a93d925cfbbd0f23 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_d8a382627ef730cbe0664d9439fd6a59/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_91d54591c1b07bc3a93d925cfbbd0f23 --virtualsize 2097152K --name brick_91d54591c1b07bc3a93d925cfbbd0f23 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_91d54591c1b07bc3a93d925cfbbd0f23" created. 
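Annotation: the `volume stop: vol_15e0122e942fc41f80666a3714670682: failed: Volume ... does not exist` error above is gluster rejecting a stop for a volume it has no record of, which is exactly the heketi/gluster volume-count mismatch this bug tracks. A minimal sketch, assuming nothing about heketi's internals, of guarding the stop with an existence check (the function name is ours, not heketi's):

```shell
# Sketch, not heketi code: check gluster's view of the volume before
# issuing "volume stop", so an unknown volume is skipped instead of
# failing with "Volume ... does not exist".
stop_volume_if_present() {
    vol="$1"
    if gluster --mode=script volume info "$vol" >/dev/null 2>&1; then
        gluster --mode=script volume stop "$vol" force
    else
        echo "skip: $vol not known to gluster"
    fi
}
```
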
>[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_91d54591c1b07bc3a93d925cfbbd0f23 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_91d54591c1b07bc3a93d925cfbbd0f23 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_91d54591c1b07bc3a93d925cfbbd0f23 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_91d54591c1b07bc3a93d925cfbbd0f23 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 100.416µs >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_aec48b6cbce6be7256a2b86efac6aef6 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_91d54591c1b07bc3a93d925cfbbd0f23 
/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_91d54591c1b07bc3a93d925cfbbd0f23 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_91d54591c1b07bc3a93d925cfbbd0f23/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8d39164e3ebba9d10c8b6ed33dafabc7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2002 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_91d54591c1b07bc3a93d925cfbbd0f23/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_aec48b6cbce6be7256a2b86efac6aef6 --virtualsize 2097152K --name brick_aec48b6cbce6be7256a2b86efac6aef6 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_aec48b6cbce6be7256a2b86efac6aef6" created. 
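Annotation: the repeated `awk "BEGIN {print ... >> ...}"` commands in this log are just a quoting-safe way to append one fstab line. A reproduction against a temporary file (the temp path is ours; heketi writes to /var/lib/heketi/fstab):

```shell
# Reproduce heketi's fstab append against a temporary file, using one of
# the device/mountpoint pairs from this log.
fstab=$(mktemp)
dev=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c4423da2a93ab3c5917479262cf1d93a
mnt=/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c4423da2a93ab3c5917479262cf1d93a
awk "BEGIN {print \"$dev $mnt xfs rw,inode64,noatime,nouuid 1 2\" >> \"$fstab\"}"
cat "$fstab"
```
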
>[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_91d54591c1b07bc3a93d925cfbbd0f23/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:31 Creating volume vol_b8f78f128f20b61ef032ce9ee5b6481c replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_aec48b6cbce6be7256a2b86efac6aef6 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_aec48b6cbce6be7256a2b86efac6aef6 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_aec48b6cbce6be7256a2b86efac6aef6 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_aec48b6cbce6be7256a2b86efac6aef6 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_8d39164e3ebba9d10c8b6ed33dafabc7 
--virtualsize 2097152K --name brick_8d39164e3ebba9d10c8b6ed33dafabc7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_8d39164e3ebba9d10c8b6ed33dafabc7" created. >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_26379a9db79930ad00a37453b492ebac >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_8d39164e3ebba9d10c8b6ed33dafabc7 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_8d39164e3ebba9d10c8b6ed33dafabc7 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_aec48b6cbce6be7256a2b86efac6aef6 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_aec48b6cbce6be7256a2b86efac6aef6 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print 
\"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_8d39164e3ebba9d10c8b6ed33dafabc7 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8d39164e3ebba9d10c8b6ed33dafabc7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_aec48b6cbce6be7256a2b86efac6aef6/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2003 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_aec48b6cbce6be7256a2b86efac6aef6/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_26379a9db79930ad00a37453b492ebac --virtualsize 2097152K --name brick_26379a9db79930ad00a37453b492ebac >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_26379a9db79930ad00a37453b492ebac" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_8d39164e3ebba9d10c8b6ed33dafabc7 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8d39164e3ebba9d10c8b6ed33dafabc7 >Result: >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 135.032µs >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_aec48b6cbce6be7256a2b86efac6aef6/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8d39164e3ebba9d10c8b6ed33dafabc7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_26379a9db79930ad00a37453b492ebac >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_26379a9db79930ad00a37453b492ebac isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:31 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6c8f55013bee289cd516ef97746721ff >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8d39164e3ebba9d10c8b6ed33dafabc7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_26379a9db79930ad00a37453b492ebac /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_26379a9db79930ad00a37453b492ebac xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8d39164e3ebba9d10c8b6ed33dafabc7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_26379a9db79930ad00a37453b492ebac /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_26379a9db79930ad00a37453b492ebac >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_26379a9db79930ad00a37453b492ebac/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4fd98fc9e1c8a606b4b5c70bb555597e >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_6c8f55013bee289cd516ef97746721ff --virtualsize 2097152K --name brick_6c8f55013bee289cd516ef97746721ff >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_6c8f55013bee289cd516ef97746721ff" created. >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2003 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_26379a9db79930ad00a37453b492ebac/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6c8f55013bee289cd516ef97746721ff >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6c8f55013bee289cd516ef97746721ff isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 
sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_26379a9db79930ad00a37453b492ebac/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:32 Creating volume vol_dcaf25d0becadd0bb3c732c2c2ca27da replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6c8f55013bee289cd516ef97746721ff /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6c8f55013bee289cd516ef97746721ff xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_4fd98fc9e1c8a606b4b5c70bb555597e --virtualsize 2097152K --name brick_4fd98fc9e1c8a606b4b5c70bb555597e >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_4fd98fc9e1c8a606b4b5c70bb555597e" created. 
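Annotation: each `mkfs.xfs` report above shows `bsize=4096` and `blocks=524288` in its data section; multiplying them cross-checks the filesystem size against the 2 GiB LV from `lvcreate`:

```shell
# Cross-check: data block size times block count from the mkfs.xfs report
# equals the LV's virtual size.
bsize=4096
blocks=524288
echo "$((bsize * blocks / 1024 / 1024 / 1024)) GiB"
```
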
>[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c77746b3040810cf2ae10fc585c67fd5 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6c8f55013bee289cd516ef97746721ff /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6c8f55013bee289cd516ef97746721ff >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4fd98fc9e1c8a606b4b5c70bb555597e >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4fd98fc9e1c8a606b4b5c70bb555597e isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6c8f55013bee289cd516ef97746721ff/brick >Result: >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 136.994µs >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: 
Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4fd98fc9e1c8a606b4b5c70bb555597e /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4fd98fc9e1c8a606b4b5c70bb555597e xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2004 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6c8f55013bee289cd516ef97746721ff/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6c8f55013bee289cd516ef97746721ff/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_c77746b3040810cf2ae10fc585c67fd5 --virtualsize 2097152K --name brick_c77746b3040810cf2ae10fc585c67fd5 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_c77746b3040810cf2ae10fc585c67fd5" created. 
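Annotation: the recurring `chown :200x` / `chmod 2775` pairs give each brick directory a volume-specific group (the 2001-2006 GIDs seen above) plus the setgid bit, so files created inside the brick inherit that group. The permission part can be demonstrated on a temporary directory:

```shell
# Demonstrate heketi's brick-directory mode on a scratch directory.
d=$(mktemp -d)
chmod 2775 "$d"
stat -c '%a' "$d"   # GNU stat shows the setgid bit as the leading 2
```
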
>[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_4fd98fc9e1c8a606b4b5c70bb555597e /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4fd98fc9e1c8a606b4b5c70bb555597e >Result: >[kubeexec] DEBUG 2018/06/08 08:54:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cecca27e94262fbaae81106a8cfeb0b2 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4fd98fc9e1c8a606b4b5c70bb555597e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c77746b3040810cf2ae10fc585c67fd5 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c77746b3040810cf2ae10fc585c67fd5 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2002 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4fd98fc9e1c8a606b4b5c70bb555597e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c77746b3040810cf2ae10fc585c67fd5 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c77746b3040810cf2ae10fc585c67fd5 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4fd98fc9e1c8a606b4b5c70bb555597e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c77746b3040810cf2ae10fc585c67fd5 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c77746b3040810cf2ae10fc585c67fd5 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_cecca27e94262fbaae81106a8cfeb0b2 --virtualsize 2097152K --name brick_cecca27e94262fbaae81106a8cfeb0b2 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_cecca27e94262fbaae81106a8cfeb0b2" created. 
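Annotation: every brick mount in this log uses `rw,inode64,noatime,nouuid`; `nouuid` is needed because bricks cloned or snapshotted via LVM would otherwise collide on the XFS UUID and be refused by the kernel. The fstab entry heketi appends has this shape (fields: device, mountpoint, type, options, dump, fsck pass), shown here for one of the bricks above:

```
/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_c77746b3040810cf2ae10fc585c67fd5 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c77746b3040810cf2ae10fc585c67fd5 xfs rw,inode64,noatime,nouuid 1 2
```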
>[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c77746b3040810cf2ae10fc585c67fd5/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8546c963be0cfb030d57435e94a50d61 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_cecca27e94262fbaae81106a8cfeb0b2 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_cecca27e94262fbaae81106a8cfeb0b2 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2004 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c77746b3040810cf2ae10fc585c67fd5/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_cecca27e94262fbaae81106a8cfeb0b2 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cecca27e94262fbaae81106a8cfeb0b2 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c77746b3040810cf2ae10fc585c67fd5/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:33 Creating volume vol_89ebb1e7eed2ff557488996a1657e75e replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_cecca27e94262fbaae81106a8cfeb0b2 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cecca27e94262fbaae81106a8cfeb0b2 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_8546c963be0cfb030d57435e94a50d61 --virtualsize 2097152K --name brick_8546c963be0cfb030d57435e94a50d61 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_8546c963be0cfb030d57435e94a50d61" created. 
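Annotation: the `[cmdexec] INFO ... Creating volume ... replica 3` lines mark the gluster volume-create step, whose CLI invocation is not captured verbatim in this excerpt. A sketch of its assumed shape (helper name and host:/brick arguments are illustrative only; the helper merely formats the command string):

```shell
# Assumed shape of the gluster call behind "Creating volume ... replica 3".
# This only builds the command text; it does not run gluster.
build_volume_create() {
    vol="$1"; shift
    printf 'gluster --mode=script volume create %s replica 3 %s force\n' "$vol" "$*"
}
build_volume_create vol_89ebb1e7eed2ff557488996a1657e75e \
    host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3
```
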
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 122.117µs >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cecca27e94262fbaae81106a8cfeb0b2/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_cdc8c0d24f68026f06c17385d2fd0029 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2005 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cecca27e94262fbaae81106a8cfeb0b2/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_8546c963be0cfb030d57435e94a50d61 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_8546c963be0cfb030d57435e94a50d61 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cecca27e94262fbaae81106a8cfeb0b2/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_8546c963be0cfb030d57435e94a50d61 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8546c963be0cfb030d57435e94a50d61 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_cdc8c0d24f68026f06c17385d2fd0029 --virtualsize 2097152K --name brick_cdc8c0d24f68026f06c17385d2fd0029 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_cdc8c0d24f68026f06c17385d2fd0029" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_8546c963be0cfb030d57435e94a50d61 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8546c963be0cfb030d57435e94a50d61 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dddf81e8d43f425cc26134166b21f15e >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8546c963be0cfb030d57435e94a50d61/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_cdc8c0d24f68026f06c17385d2fd0029 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_cdc8c0d24f68026f06c17385d2fd0029 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2004 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8546c963be0cfb030d57435e94a50d61/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_cdc8c0d24f68026f06c17385d2fd0029 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_cdc8c0d24f68026f06c17385d2fd0029 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8546c963be0cfb030d57435e94a50d61/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_cdc8c0d24f68026f06c17385d2fd0029 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_cdc8c0d24f68026f06c17385d2fd0029 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_dddf81e8d43f425cc26134166b21f15e --virtualsize 2097152K --name brick_dddf81e8d43f425cc26134166b21f15e >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_dddf81e8d43f425cc26134166b21f15e" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_cdc8c0d24f68026f06c17385d2fd0029/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6319c734fbf95bbbfb1156b24dcc43ea >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dddf81e8d43f425cc26134166b21f15e >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dddf81e8d43f425cc26134166b21f15e isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2005 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_cdc8c0d24f68026f06c17385d2fd0029/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dddf81e8d43f425cc26134166b21f15e 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dddf81e8d43f425cc26134166b21f15e xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 91.519µs >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_cdc8c0d24f68026f06c17385d2fd0029/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:34 Creating volume vol_1918777ef3ce84df17c8a114fb89f33e replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dddf81e8d43f425cc26134166b21f15e /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dddf81e8d43f425cc26134166b21f15e >Result: >[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_6319c734fbf95bbbfb1156b24dcc43ea --virtualsize 2097152K --name brick_6319c734fbf95bbbfb1156b24dcc43ea >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_6319c734fbf95bbbfb1156b24dcc43ea" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dddf81e8d43f425cc26134166b21f15e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d985a824b73813069566b0a71bb73269 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6319c734fbf95bbbfb1156b24dcc43ea >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6319c734fbf95bbbfb1156b24dcc43ea isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2007 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dddf81e8d43f425cc26134166b21f15e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6319c734fbf95bbbfb1156b24dcc43ea 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6319c734fbf95bbbfb1156b24dcc43ea xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dddf81e8d43f425cc26134166b21f15e/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_6319c734fbf95bbbfb1156b24dcc43ea /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6319c734fbf95bbbfb1156b24dcc43ea >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_d985a824b73813069566b0a71bb73269 --virtualsize 2097152K --name brick_d985a824b73813069566b0a71bb73269 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_d985a824b73813069566b0a71bb73269" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6319c734fbf95bbbfb1156b24dcc43ea/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0a07a24688dfe13d4e1726b8d05e24a7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d985a824b73813069566b0a71bb73269 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d985a824b73813069566b0a71bb73269 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2003 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6319c734fbf95bbbfb1156b24dcc43ea/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d985a824b73813069566b0a71bb73269 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d985a824b73813069566b0a71bb73269 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6319c734fbf95bbbfb1156b24dcc43ea/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d985a824b73813069566b0a71bb73269 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d985a824b73813069566b0a71bb73269 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_0a07a24688dfe13d4e1726b8d05e24a7 --virtualsize 2097152K --name brick_0a07a24688dfe13d4e1726b8d05e24a7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_0a07a24688dfe13d4e1726b8d05e24a7" created. 
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 235.352µs >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d985a824b73813069566b0a71bb73269/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_322eeb9ae02c4f9e338567566394d2c7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_0a07a24688dfe13d4e1726b8d05e24a7 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_0a07a24688dfe13d4e1726b8d05e24a7 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started DELETE /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Started DELETE /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 202 Accepted in 21.165555ms >[asynchttp] INFO 2018/06/08 08:54:35 asynchttp.go:288: Started job cc7d3ffb84c36288d9c4c22cf55e3ada >[heketi] INFO 2018/06/08 08:54:35 Started async operation: Delete Volume >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 88.54µs >[negroni] Completed 202 Accepted in 27.561916ms 
>[asynchttp] INFO 2018/06/08 08:54:35 asynchttp.go:288: Started job 30a446e1b3f0ca388030dee9d6823363 >[heketi] INFO 2018/06/08 08:54:35 Started async operation: Delete Volume >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 131.6µs >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2007 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d985a824b73813069566b0a71bb73269/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_0a07a24688dfe13d4e1726b8d05e24a7 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0a07a24688dfe13d4e1726b8d05e24a7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d985a824b73813069566b0a71bb73269/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:36 Creating volume vol_afef5f4a84abec3ab3e4f2a5bae2db23 replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_0a07a24688dfe13d4e1726b8d05e24a7 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0a07a24688dfe13d4e1726b8d05e24a7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_322eeb9ae02c4f9e338567566394d2c7 --virtualsize 2097152K --name brick_322eeb9ae02c4f9e338567566394d2c7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_322eeb9ae02c4f9e338567566394d2c7" created. >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0a07a24688dfe13d4e1726b8d05e24a7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bd4fcfec1db80add56b57e18bd681378 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2006 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0a07a24688dfe13d4e1726b8d05e24a7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_322eeb9ae02c4f9e338567566394d2c7 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_322eeb9ae02c4f9e338567566394d2c7 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 
ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0a07a24688dfe13d4e1726b8d05e24a7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_322eeb9ae02c4f9e338567566394d2c7 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_322eeb9ae02c4f9e338567566394d2c7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_bd4fcfec1db80add56b57e18bd681378 --virtualsize 2097152K --name brick_bd4fcfec1db80add56b57e18bd681378 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_bd4fcfec1db80add56b57e18bd681378" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_322eeb9ae02c4f9e338567566394d2c7 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_322eeb9ae02c4f9e338567566394d2c7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c740e50db586c869e27205aee2820dd7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_322eeb9ae02c4f9e338567566394d2c7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bd4fcfec1db80add56b57e18bd681378 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bd4fcfec1db80add56b57e18bd681378 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2007 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_322eeb9ae02c4f9e338567566394d2c7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bd4fcfec1db80add56b57e18bd681378 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bd4fcfec1db80add56b57e18bd681378 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 120.57µs >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_322eeb9ae02c4f9e338567566394d2c7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bd4fcfec1db80add56b57e18bd681378 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bd4fcfec1db80add56b57e18bd681378 >Result: >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 114.311µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 147.951µs >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_c740e50db586c869e27205aee2820dd7 --virtualsize 2097152K --name 
brick_c740e50db586c869e27205aee2820dd7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_c740e50db586c869e27205aee2820dd7" created. >[kubeexec] DEBUG 2018/06/08 08:54:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bd4fcfec1db80add56b57e18bd681378/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3ca7a79bfe1ee21131ef20732eac93d7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c740e50db586c869e27205aee2820dd7 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c740e50db586c869e27205aee2820dd7 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2006 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bd4fcfec1db80add56b57e18bd681378/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c740e50db586c869e27205aee2820dd7 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c740e50db586c869e27205aee2820dd7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bd4fcfec1db80add56b57e18bd681378/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:37 Creating volume vol_daf13b7f607d1b280c78e909af25a215 replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c740e50db586c869e27205aee2820dd7 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c740e50db586c869e27205aee2820dd7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_3ca7a79bfe1ee21131ef20732eac93d7 --virtualsize 2097152K --name brick_3ca7a79bfe1ee21131ef20732eac93d7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_3ca7a79bfe1ee21131ef20732eac93d7" created. 
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f88d16c3bc0200c12aa9db817d4d9f0a
Result:
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c740e50db586c869e27205aee2820dd7/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_3ca7a79bfe1ee21131ef20732eac93d7
Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_3ca7a79bfe1ee21131ef20732eac93d7 isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c740e50db586c869e27205aee2820dd7/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_3ca7a79bfe1ee21131ef20732eac93d7 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3ca7a79bfe1ee21131ef20732eac93d7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c740e50db586c869e27205aee2820dd7/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_f88d16c3bc0200c12aa9db817d4d9f0a --virtualsize 2097152K --name brick_f88d16c3bc0200c12aa9db817d4d9f0a
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_f88d16c3bc0200c12aa9db817d4d9f0a" created.
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_3ca7a79bfe1ee21131ef20732eac93d7 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3ca7a79bfe1ee21131ef20732eac93d7
Result:
[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
[negroni] Completed 200 OK in 126.576µs
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_b35881161d7bf25bfa4af1bea4746d58
Result:
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3ca7a79bfe1ee21131ef20732eac93d7/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f88d16c3bc0200c12aa9db817d4d9f0a
Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f88d16c3bc0200c12aa9db817d4d9f0a isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
[negroni] Completed 200 OK in 92.327µs
[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
[negroni] Completed 200 OK in 152.783µs
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f88d16c3bc0200c12aa9db817d4d9f0a /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f88d16c3bc0200c12aa9db817d4d9f0a xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:54:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2006 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3ca7a79bfe1ee21131ef20732eac93d7/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3ca7a79bfe1ee21131ef20732eac93d7/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f88d16c3bc0200c12aa9db817d4d9f0a /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f88d16c3bc0200c12aa9db817d4d9f0a
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_b35881161d7bf25bfa4af1bea4746d58 --virtualsize 2097152K --name brick_b35881161d7bf25bfa4af1bea4746d58
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_b35881161d7bf25bfa4af1bea4746d58" created.
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_30eb22cad2c3536d381b8d71943cffd2
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f88d16c3bc0200c12aa9db817d4d9f0a/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_b35881161d7bf25bfa4af1bea4746d58
Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_b35881161d7bf25bfa4af1bea4746d58 isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f88d16c3bc0200c12aa9db817d4d9f0a/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_b35881161d7bf25bfa4af1bea4746d58 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_b35881161d7bf25bfa4af1bea4746d58 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f88d16c3bc0200c12aa9db817d4d9f0a/brick
Result:
[cmdexec] INFO 2018/06/08 08:54:38 Creating volume vol_337bf2c01bf8c45eec5bab53ad5c2e46 replica 3
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_b35881161d7bf25bfa4af1bea4746d58 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_b35881161d7bf25bfa4af1bea4746d58
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_30eb22cad2c3536d381b8d71943cffd2 --virtualsize 2097152K --name brick_30eb22cad2c3536d381b8d71943cffd2
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_30eb22cad2c3536d381b8d71943cffd2" created.
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_a84d4290cbebd984dd11b88eb806d93b
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_b35881161d7bf25bfa4af1bea4746d58/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_30eb22cad2c3536d381b8d71943cffd2
Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_30eb22cad2c3536d381b8d71943cffd2 isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2002 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_b35881161d7bf25bfa4af1bea4746d58/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_30eb22cad2c3536d381b8d71943cffd2 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_30eb22cad2c3536d381b8d71943cffd2 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
[negroni] Completed 200 OK in 85.469µs
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_b35881161d7bf25bfa4af1bea4746d58/brick
Result:
[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
[negroni] Completed 200 OK in 119.632µs
[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
[negroni] Completed 200 OK in 116.42µs
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_30eb22cad2c3536d381b8d71943cffd2 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_30eb22cad2c3536d381b8d71943cffd2
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_a84d4290cbebd984dd11b88eb806d93b --virtualsize 2097152K --name brick_a84d4290cbebd984dd11b88eb806d93b
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_a84d4290cbebd984dd11b88eb806d93b" created.
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_bf4ebc8a0c81383f3dc357f04e4903c1
Result:
[kubeexec] DEBUG 2018/06/08 08:54:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_30eb22cad2c3536d381b8d71943cffd2/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_a84d4290cbebd984dd11b88eb806d93b
Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_a84d4290cbebd984dd11b88eb806d93b isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2005 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_30eb22cad2c3536d381b8d71943cffd2/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_a84d4290cbebd984dd11b88eb806d93b /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_a84d4290cbebd984dd11b88eb806d93b xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_30eb22cad2c3536d381b8d71943cffd2/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_a84d4290cbebd984dd11b88eb806d93b /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_a84d4290cbebd984dd11b88eb806d93b
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_bf4ebc8a0c81383f3dc357f04e4903c1 --virtualsize 2097152K --name brick_bf4ebc8a0c81383f3dc357f04e4903c1
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_bf4ebc8a0c81383f3dc357f04e4903c1" created.
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_a84d4290cbebd984dd11b88eb806d93b/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_02dbc9628a1cd8a5f20d9fea7a615dcc
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_bf4ebc8a0c81383f3dc357f04e4903c1
Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_bf4ebc8a0c81383f3dc357f04e4903c1 isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2002 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_a84d4290cbebd984dd11b88eb806d93b/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_bf4ebc8a0c81383f3dc357f04e4903c1 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_bf4ebc8a0c81383f3dc357f04e4903c1 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_a84d4290cbebd984dd11b88eb806d93b/brick
Result:
[cmdexec] INFO 2018/06/08 08:54:39 Creating volume vol_f73270331b95278f490fd1dfe0b010df replica 3
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_bf4ebc8a0c81383f3dc357f04e4903c1 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_bf4ebc8a0c81383f3dc357f04e4903c1
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_02dbc9628a1cd8a5f20d9fea7a615dcc --virtualsize 2097152K --name brick_02dbc9628a1cd8a5f20d9fea7a615dcc
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_02dbc9628a1cd8a5f20d9fea7a615dcc" created.
[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
[negroni] Completed 200 OK in 105.998µs
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_bf4ebc8a0c81383f3dc357f04e4903c1/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2bba219914cb29c5994021b1731b7763
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_02dbc9628a1cd8a5f20d9fea7a615dcc
Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_02dbc9628a1cd8a5f20d9fea7a615dcc isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
[negroni] Completed 200 OK in 116.679µs
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2004 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_bf4ebc8a0c81383f3dc357f04e4903c1/brick
Result:
[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
[negroni] Completed 200 OK in 83.609µs
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_02dbc9628a1cd8a5f20d9fea7a615dcc /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_02dbc9628a1cd8a5f20d9fea7a615dcc xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:54:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_bf4ebc8a0c81383f3dc357f04e4903c1/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_02dbc9628a1cd8a5f20d9fea7a615dcc /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_02dbc9628a1cd8a5f20d9fea7a615dcc
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2bba219914cb29c5994021b1731b7763 --virtualsize 2097152K --name brick_2bba219914cb29c5994021b1731b7763
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_2bba219914cb29c5994021b1731b7763" created.
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_02dbc9628a1cd8a5f20d9fea7a615dcc/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6167bec47d9258cf5b2a5e3a48c0d391
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2bba219914cb29c5994021b1731b7763
Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2bba219914cb29c5994021b1731b7763 isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_02dbc9628a1cd8a5f20d9fea7a615dcc/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2bba219914cb29c5994021b1731b7763 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2bba219914cb29c5994021b1731b7763 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_02dbc9628a1cd8a5f20d9fea7a615dcc/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2bba219914cb29c5994021b1731b7763 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2bba219914cb29c5994021b1731b7763
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_6167bec47d9258cf5b2a5e3a48c0d391 --virtualsize 2097152K --name brick_6167bec47d9258cf5b2a5e3a48c0d391
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_6167bec47d9258cf5b2a5e3a48c0d391" created.
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_dde70bf2e86fac7df0e0427af9bf5db3
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2bba219914cb29c5994021b1731b7763/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6167bec47d9258cf5b2a5e3a48c0d391
Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6167bec47d9258cf5b2a5e3a48c0d391 isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2004 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2bba219914cb29c5994021b1731b7763/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6167bec47d9258cf5b2a5e3a48c0d391 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6167bec47d9258cf5b2a5e3a48c0d391 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
[negroni] Completed 200 OK in 238.953µs
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2bba219914cb29c5994021b1731b7763/brick
Result:
[cmdexec] INFO 2018/06/08 08:54:40 Creating volume vol_1ef58be42cf8ea7cf9298cff303e903a replica 3
[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
[negroni] Completed 200 OK in 97.468µs
[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
[negroni] Completed 200 OK in 109.285µs
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_6167bec47d9258cf5b2a5e3a48c0d391 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6167bec47d9258cf5b2a5e3a48c0d391
Result:
[kubeexec] DEBUG 2018/06/08 08:54:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_dde70bf2e86fac7df0e0427af9bf5db3 --virtualsize 2097152K --name brick_dde70bf2e86fac7df0e0427af9bf5db3
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_dde70bf2e86fac7df0e0427af9bf5db3" created.
[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_b2827681c0e1655380b4d5d53226bf7e
Result:
[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6167bec47d9258cf5b2a5e3a48c0d391/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_dde70bf2e86fac7df0e0427af9bf5db3
Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_dde70bf2e86fac7df0e0427af9bf5db3 isize=512 agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2003 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6167bec47d9258cf5b2a5e3a48c0d391/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_dde70bf2e86fac7df0e0427af9bf5db3 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_dde70bf2e86fac7df0e0427af9bf5db3 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6167bec47d9258cf5b2a5e3a48c0d391/brick
Result:
[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_dde70bf2e86fac7df0e0427af9bf5db3 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_dde70bf2e86fac7df0e0427af9bf5db3
Result:
[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_b2827681c0e1655380b4d5d53226bf7e --virtualsize 2097152K --name brick_b2827681c0e1655380b4d5d53226bf7e
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_b2827681c0e1655380b4d5d53226bf7e" created.
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9963c5d95158f58c8fc2f28d81d925ea
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_dde70bf2e86fac7df0e0427af9bf5db3/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_b2827681c0e1655380b4d5d53226bf7e
>Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_b2827681c0e1655380b4d5d53226bf7e isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2002 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_dde70bf2e86fac7df0e0427af9bf5db3/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_b2827681c0e1655380b4d5d53226bf7e /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_b2827681c0e1655380b4d5d53226bf7e xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_dde70bf2e86fac7df0e0427af9bf5db3/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_b2827681c0e1655380b4d5d53226bf7e /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_b2827681c0e1655380b4d5d53226bf7e
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_9963c5d95158f58c8fc2f28d81d925ea --virtualsize 2097152K --name brick_9963c5d95158f58c8fc2f28d81d925ea
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_9963c5d95158f58c8fc2f28d81d925ea" created.
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 135.223µs
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_da658ca7b4961a716b886b57a67ff000
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_b2827681c0e1655380b4d5d53226bf7e/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9963c5d95158f58c8fc2f28d81d925ea
>Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9963c5d95158f58c8fc2f28d81d925ea isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 155.571µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 220.734µs
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2003 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_b2827681c0e1655380b4d5d53226bf7e/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9963c5d95158f58c8fc2f28d81d925ea /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9963c5d95158f58c8fc2f28d81d925ea xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_b2827681c0e1655380b4d5d53226bf7e/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:42 Creating volume vol_1a828ff2310b778d09ffdadd755dc5ee replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9963c5d95158f58c8fc2f28d81d925ea /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9963c5d95158f58c8fc2f28d81d925ea
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_da658ca7b4961a716b886b57a67ff000 --virtualsize 2097152K --name brick_da658ca7b4961a716b886b57a67ff000
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_da658ca7b4961a716b886b57a67ff000" created.
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9963c5d95158f58c8fc2f28d81d925ea/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a1150dfd2cad12bad69ef9cc830da83d
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_da658ca7b4961a716b886b57a67ff000
>Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_da658ca7b4961a716b886b57a67ff000 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2007 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9963c5d95158f58c8fc2f28d81d925ea/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_da658ca7b4961a716b886b57a67ff000 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_da658ca7b4961a716b886b57a67ff000 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9963c5d95158f58c8fc2f28d81d925ea/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_da658ca7b4961a716b886b57a67ff000 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_da658ca7b4961a716b886b57a67ff000
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_a1150dfd2cad12bad69ef9cc830da83d --virtualsize 2097152K --name brick_a1150dfd2cad12bad69ef9cc830da83d
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_a1150dfd2cad12bad69ef9cc830da83d" created.
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4d4160003ee228514a59c2c61932ceed
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_da658ca7b4961a716b886b57a67ff000/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_a1150dfd2cad12bad69ef9cc830da83d
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_a1150dfd2cad12bad69ef9cc830da83d isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2003 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_da658ca7b4961a716b886b57a67ff000/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_a1150dfd2cad12bad69ef9cc830da83d /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a1150dfd2cad12bad69ef9cc830da83d xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 101.395µs
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_da658ca7b4961a716b886b57a67ff000/brick
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 116.084µs
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_a1150dfd2cad12bad69ef9cc830da83d /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a1150dfd2cad12bad69ef9cc830da83d
>Result:
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 204.43µs
>[kubeexec] DEBUG 2018/06/08 08:54:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_4d4160003ee228514a59c2c61932ceed --virtualsize 2097152K --name brick_4d4160003ee228514a59c2c61932ceed
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_4d4160003ee228514a59c2c61932ceed" created.
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a1150dfd2cad12bad69ef9cc830da83d/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0a39774f01ac7959661ff5afff2595ab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4d4160003ee228514a59c2c61932ceed
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4d4160003ee228514a59c2c61932ceed isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2007 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a1150dfd2cad12bad69ef9cc830da83d/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4d4160003ee228514a59c2c61932ceed /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4d4160003ee228514a59c2c61932ceed xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a1150dfd2cad12bad69ef9cc830da83d/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:43 Creating volume vol_9fb2830da79dd70d910dad8426dc236f replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4d4160003ee228514a59c2c61932ceed /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4d4160003ee228514a59c2c61932ceed
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4d4160003ee228514a59c2c61932ceed/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_211fca2005300a58cf3b464b89e9deea
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_0a39774f01ac7959661ff5afff2595ab --virtualsize 2097152K --name brick_0a39774f01ac7959661ff5afff2595ab
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_0a39774f01ac7959661ff5afff2595ab" created.
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2006 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4d4160003ee228514a59c2c61932ceed/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0a39774f01ac7959661ff5afff2595ab
>Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0a39774f01ac7959661ff5afff2595ab isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4d4160003ee228514a59c2c61932ceed/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0a39774f01ac7959661ff5afff2595ab /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0a39774f01ac7959661ff5afff2595ab xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_211fca2005300a58cf3b464b89e9deea --virtualsize 2097152K --name brick_211fca2005300a58cf3b464b89e9deea
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_211fca2005300a58cf3b464b89e9deea" created.
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0a39774f01ac7959661ff5afff2595ab /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0a39774f01ac7959661ff5afff2595ab
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 121.029µs
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_df153caad54b6511504d5af0cdc4e2b5
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_211fca2005300a58cf3b464b89e9deea
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_211fca2005300a58cf3b464b89e9deea isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0a39774f01ac7959661ff5afff2595ab/brick
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 114.319µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 108.827µs
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_211fca2005300a58cf3b464b89e9deea /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_211fca2005300a58cf3b464b89e9deea xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2004 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0a39774f01ac7959661ff5afff2595ab/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0a39774f01ac7959661ff5afff2595ab/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_211fca2005300a58cf3b464b89e9deea /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_211fca2005300a58cf3b464b89e9deea
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_df153caad54b6511504d5af0cdc4e2b5 --virtualsize 2097152K --name brick_df153caad54b6511504d5af0cdc4e2b5
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_df153caad54b6511504d5af0cdc4e2b5" created.
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c8dd6b39719a6bc75ea331fae5a92396
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_211fca2005300a58cf3b464b89e9deea/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_df153caad54b6511504d5af0cdc4e2b5
>Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_df153caad54b6511504d5af0cdc4e2b5 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2006 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_211fca2005300a58cf3b464b89e9deea/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_df153caad54b6511504d5af0cdc4e2b5 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_df153caad54b6511504d5af0cdc4e2b5 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_211fca2005300a58cf3b464b89e9deea/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:44 Creating volume vol_dc9ab13a25ccbad8262fba92766a31f9 replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_c8dd6b39719a6bc75ea331fae5a92396 --virtualsize 2097152K --name brick_c8dd6b39719a6bc75ea331fae5a92396
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_c8dd6b39719a6bc75ea331fae5a92396" created.
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_df153caad54b6511504d5af0cdc4e2b5 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_df153caad54b6511504d5af0cdc4e2b5
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_54258f15bd3df0dfb8421196bdea9774
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_df153caad54b6511504d5af0cdc4e2b5/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c8dd6b39719a6bc75ea331fae5a92396
>Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c8dd6b39719a6bc75ea331fae5a92396 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2005 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_df153caad54b6511504d5af0cdc4e2b5/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c8dd6b39719a6bc75ea331fae5a92396 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c8dd6b39719a6bc75ea331fae5a92396 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 112.912µs
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_df153caad54b6511504d5af0cdc4e2b5/brick
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 110.642µs
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c8dd6b39719a6bc75ea331fae5a92396 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c8dd6b39719a6bc75ea331fae5a92396
>Result:
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 121.123µs
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_54258f15bd3df0dfb8421196bdea9774 --virtualsize 2097152K --name brick_54258f15bd3df0dfb8421196bdea9774
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_54258f15bd3df0dfb8421196bdea9774" created.
>[kubeexec] DEBUG 2018/06/08 08:54:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c8dd6b39719a6bc75ea331fae5a92396/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e1709c1db8499b885817e5f4682c56ca
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_54258f15bd3df0dfb8421196bdea9774
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_54258f15bd3df0dfb8421196bdea9774 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2005 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c8dd6b39719a6bc75ea331fae5a92396/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_54258f15bd3df0dfb8421196bdea9774 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_54258f15bd3df0dfb8421196bdea9774 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c8dd6b39719a6bc75ea331fae5a92396/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_54258f15bd3df0dfb8421196bdea9774 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_54258f15bd3df0dfb8421196bdea9774
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_e1709c1db8499b885817e5f4682c56ca --virtualsize 2097152K --name brick_e1709c1db8499b885817e5f4682c56ca
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_e1709c1db8499b885817e5f4682c56ca" created.
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_54258f15bd3df0dfb8421196bdea9774/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e54cabdc56fef4e5b4a11c7a72eaff3d >Result: >[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e1709c1db8499b885817e5f4682c56ca >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e1709c1db8499b885817e5f4682c56ca isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2005 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_54258f15bd3df0dfb8421196bdea9774/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_54258f15bd3df0dfb8421196bdea9774/brick >Result: 
>[cmdexec] INFO 2018/06/08 08:54:45 Creating volume vol_b7fb8f19c77039982909b8868f7af4cc replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e1709c1db8499b885817e5f4682c56ca /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e1709c1db8499b885817e5f4682c56ca xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_e1709c1db8499b885817e5f4682c56ca /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e1709c1db8499b885817e5f4682c56ca >Result: >[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_e54cabdc56fef4e5b4a11c7a72eaff3d --virtualsize 2097152K --name brick_e54cabdc56fef4e5b4a11c7a72eaff3d >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e54cabdc56fef4e5b4a11c7a72eaff3d" created. 
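Each brick above is provisioned by the same repeated sequence: `mkdir -p` for the mount point, `lvcreate --thin` for the thin LV, `mkfs.xfs`, an awk one-liner that appends the fstab entry, `mount`, then `mkdir`/`chown`/`chmod` on the inner `brick` directory. The awk step is the least obvious; a minimal sketch of it (the `vg_example`/`brick_example` names are placeholders, and a temp file stands in for /var/lib/heketi/fstab):

```shell
# Hypothetical names; heketi uses vg_<hash>/brick_<hash> as in the log above.
vg=vg_example
brick=brick_example
fstab=$(mktemp)   # stand-in for /var/lib/heketi/fstab

# heketi builds the whole fstab line inside awk's BEGIN block and lets awk
# itself (not the remote shell) perform the ">>" append to the fstab file:
awk "BEGIN {print \"/dev/mapper/${vg}-${brick} /var/lib/heketi/mounts/${vg}/${brick} xfs rw,inode64,noatime,nouuid 1 2\" >> \"${fstab}\"}"

cat "$fstab"
```

Running the whole command through awk this way keeps the append as a single argv entry for the pod exec, avoiding a second layer of shell quoting around the redirection.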
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 124.051µs
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a69c00298f148d2208ea7eab0903a3d2
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e1709c1db8499b885817e5f4682c56ca/brick
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 111.668µs
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e54cabdc56fef4e5b4a11c7a72eaff3d
>Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e54cabdc56fef4e5b4a11c7a72eaff3d isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 151.172µs
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e1709c1db8499b885817e5f4682c56ca/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e54cabdc56fef4e5b4a11c7a72eaff3d /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e54cabdc56fef4e5b4a11c7a72eaff3d xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e1709c1db8499b885817e5f4682c56ca/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_a69c00298f148d2208ea7eab0903a3d2 --virtualsize 2097152K --name brick_a69c00298f148d2208ea7eab0903a3d2
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_a69c00298f148d2208ea7eab0903a3d2" created.
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e54cabdc56fef4e5b4a11c7a72eaff3d /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e54cabdc56fef4e5b4a11c7a72eaff3d
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e54cabdc56fef4e5b4a11c7a72eaff3d/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_a69c00298f148d2208ea7eab0903a3d2
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_a69c00298f148d2208ea7eab0903a3d2 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2006 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e54cabdc56fef4e5b4a11c7a72eaff3d/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_a69c00298f148d2208ea7eab0903a3d2 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a69c00298f148d2208ea7eab0903a3d2 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e54cabdc56fef4e5b4a11c7a72eaff3d/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_2a97d96e94e0ff7c49d2b6d81dfbd8fd --virtualsize 2097152K --name brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd" created.
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_a69c00298f148d2208ea7eab0903a3d2 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a69c00298f148d2208ea7eab0903a3d2
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_68704fc0eb854fe2a0157b0261302792
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a69c00298f148d2208ea7eab0903a3d2/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a69c00298f148d2208ea7eab0903a3d2/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 162.802µs
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a69c00298f148d2208ea7eab0903a3d2/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:46 Creating volume vol_e25438438fd2d50a0b07f26b4bfb338a replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 80.972µs
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_68704fc0eb854fe2a0157b0261302792 --virtualsize 2097152K --name brick_68704fc0eb854fe2a0157b0261302792
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_68704fc0eb854fe2a0157b0261302792" created.
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 340.123µs
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b4e462e2cc3dbccfd86e44f907dc7f00
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_68704fc0eb854fe2a0157b0261302792
>Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_68704fc0eb854fe2a0157b0261302792 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2002 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_68704fc0eb854fe2a0157b0261302792 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_68704fc0eb854fe2a0157b0261302792 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_68704fc0eb854fe2a0157b0261302792 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_68704fc0eb854fe2a0157b0261302792
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_b4e462e2cc3dbccfd86e44f907dc7f00 --virtualsize 2097152K --name brick_b4e462e2cc3dbccfd86e44f907dc7f00
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_b4e462e2cc3dbccfd86e44f907dc7f00" created.
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_68704fc0eb854fe2a0157b0261302792/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_906b174d0f419aa3d1a9affe4675674a
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b4e462e2cc3dbccfd86e44f907dc7f00
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b4e462e2cc3dbccfd86e44f907dc7f00 isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2007 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_68704fc0eb854fe2a0157b0261302792/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b4e462e2cc3dbccfd86e44f907dc7f00 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b4e462e2cc3dbccfd86e44f907dc7f00 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_68704fc0eb854fe2a0157b0261302792/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_906b174d0f419aa3d1a9affe4675674a --virtualsize 2097152K --name brick_906b174d0f419aa3d1a9affe4675674a
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_906b174d0f419aa3d1a9affe4675674a" created.
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b4e462e2cc3dbccfd86e44f907dc7f00 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b4e462e2cc3dbccfd86e44f907dc7f00
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 122.165µs
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ad542929ce4fd8719fbe5fc44df98dbd
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b4e462e2cc3dbccfd86e44f907dc7f00/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_906b174d0f419aa3d1a9affe4675674a
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_906b174d0f419aa3d1a9affe4675674a isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 113.23µs
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2002 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b4e462e2cc3dbccfd86e44f907dc7f00/brick
>Result:
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 399.463µs
>[kubeexec] DEBUG 2018/06/08 08:54:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_906b174d0f419aa3d1a9affe4675674a /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_906b174d0f419aa3d1a9affe4675674a xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b4e462e2cc3dbccfd86e44f907dc7f00/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:48 Creating volume vol_d21ed39ae095af2674175a798a0cb02c replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_906b174d0f419aa3d1a9affe4675674a /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_906b174d0f419aa3d1a9affe4675674a
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_ad542929ce4fd8719fbe5fc44df98dbd --virtualsize 1048576K --name brick_ad542929ce4fd8719fbe5fc44df98dbd
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_ad542929ce4fd8719fbe5fc44df98dbd" created.
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_906b174d0f419aa3d1a9affe4675674a/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d1e261647e36a4dd323508df5b3decff
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_ad542929ce4fd8719fbe5fc44df98dbd
>Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_ad542929ce4fd8719fbe5fc44df98dbd isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2003 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_906b174d0f419aa3d1a9affe4675674a/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_ad542929ce4fd8719fbe5fc44df98dbd /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ad542929ce4fd8719fbe5fc44df98dbd xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_906b174d0f419aa3d1a9affe4675674a/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_ad542929ce4fd8719fbe5fc44df98dbd /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ad542929ce4fd8719fbe5fc44df98dbd
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_d1e261647e36a4dd323508df5b3decff --virtualsize 2097152K --name brick_d1e261647e36a4dd323508df5b3decff
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_d1e261647e36a4dd323508df5b3decff" created.
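Every `lvcreate` above warns that a 256.00 KiB chunk size "can address at most 63.25 TiB of data". That figure is consistent with thin-pool metadata accounting: each mapped chunk costs metadata, so the metadata ceiling caps the addressable data. A back-of-envelope check (the ~15.81 GiB metadata maximum and 64 bytes per mapping are assumptions drawn from LVM thin-provisioning documentation, not from this log):

```shell
# Reproduce the "63.25 TiB" figure for a 256 KiB thin-pool chunk size.
tib=$(awk 'BEGIN {
    meta_bytes = 15.8125 * 2^30   # assumed max thin-pool metadata size (~15.81 GiB)
    mappings   = meta_bytes / 64  # assumed ~64 bytes of metadata per mapped chunk
    printf "%.2f", mappings * 256 * 2^10 / 2^40   # chunks * 256 KiB, in TiB
}')
echo "chunk 256 KiB -> at most $tib TiB"
```

A larger `--chunksize` raises the ceiling proportionally at the cost of coarser allocation, which is why the warning is informational here: these 2 GiB bricks are nowhere near the limit.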
>[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_592a78c302b01c3ee9538269481405e7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ad542929ce4fd8719fbe5fc44df98dbd/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d1e261647e36a4dd323508df5b3decff >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d1e261647e36a4dd323508df5b3decff isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 117.729µs >[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d1e261647e36a4dd323508df5b3decff /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d1e261647e36a4dd323508df5b3decff xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET 
/queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 137.957µs >[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_592a78c302b01c3ee9538269481405e7 --virtualsize 2097152K --name brick_592a78c302b01c3ee9538269481405e7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_592a78c302b01c3ee9538269481405e7" created. >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 97.257µs >[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d1e261647e36a4dd323508df5b3decff /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d1e261647e36a4dd323508df5b3decff >Result: >[kubeexec] DEBUG 2018/06/08 08:54:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_1039ed81d7d98ea183e9c7d8d00c1b6d >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_592a78c302b01c3ee9538269481405e7 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_592a78c302b01c3ee9538269481405e7 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 
blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d1e261647e36a4dd323508df5b3decff/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_592a78c302b01c3ee9538269481405e7 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_592a78c302b01c3ee9538269481405e7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2003 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d1e261647e36a4dd323508df5b3decff/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_592a78c302b01c3ee9538269481405e7 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_592a78c302b01c3ee9538269481405e7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d1e261647e36a4dd323508df5b3decff/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:49 Creating volume vol_21481d8911fe8ec238d97d71c1aa5cb3 replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_1039ed81d7d98ea183e9c7d8d00c1b6d --virtualsize 1048576K --name brick_1039ed81d7d98ea183e9c7d8d00c1b6d >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_1039ed81d7d98ea183e9c7d8d00c1b6d" created. >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_592a78c302b01c3ee9538269481405e7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_11a59fea08d9b732822c2669ddaf54fa >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2004 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_592a78c302b01c3ee9538269481405e7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_1039ed81d7d98ea183e9c7d8d00c1b6d >Result: 
meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_1039ed81d7d98ea183e9c7d8d00c1b6d isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_592a78c302b01c3ee9538269481405e7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_1039ed81d7d98ea183e9c7d8d00c1b6d /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_1039ed81d7d98ea183e9c7d8d00c1b6d xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_1039ed81d7d98ea183e9c7d8d00c1b6d /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_1039ed81d7d98ea183e9c7d8d00c1b6d >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_11a59fea08d9b732822c2669ddaf54fa 
--virtualsize 2097152K --name brick_11a59fea08d9b732822c2669ddaf54fa >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_11a59fea08d9b732822c2669ddaf54fa" created. >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 152.692µs >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dba8827a8c1b642d1b34bc5cf35aa4b4 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_1039ed81d7d98ea183e9c7d8d00c1b6d/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_11a59fea08d9b732822c2669ddaf54fa >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_11a59fea08d9b732822c2669ddaf54fa isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 139.03µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 110.711µs >[kubeexec] DEBUG 2018/06/08 08:54:49 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_11a59fea08d9b732822c2669ddaf54fa /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_11a59fea08d9b732822c2669ddaf54fa xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_11a59fea08d9b732822c2669ddaf54fa /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_11a59fea08d9b732822c2669ddaf54fa >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_d389f0278a774bd7443a09af960961d8/tp_dba8827a8c1b642d1b34bc5cf35aa4b4 --virtualsize 2097152K --name brick_dba8827a8c1b642d1b34bc5cf35aa4b4 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_dba8827a8c1b642d1b34bc5cf35aa4b4" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_11a59fea08d9b732822c2669ddaf54fa/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e1c7bf2add37671886c54c55f98f9fb7 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dba8827a8c1b642d1b34bc5cf35aa4b4 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dba8827a8c1b642d1b34bc5cf35aa4b4 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2004 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_11a59fea08d9b732822c2669ddaf54fa/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dba8827a8c1b642d1b34bc5cf35aa4b4 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dba8827a8c1b642d1b34bc5cf35aa4b4 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_11a59fea08d9b732822c2669ddaf54fa/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:50 Creating volume vol_226838416791f3286fcacb7e5f1ff59d replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dba8827a8c1b642d1b34bc5cf35aa4b4 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dba8827a8c1b642d1b34bc5cf35aa4b4 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_e1c7bf2add37671886c54c55f98f9fb7 --virtualsize 1048576K --name brick_e1c7bf2add37671886c54c55f98f9fb7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e1c7bf2add37671886c54c55f98f9fb7" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b3918aae89c9aab7c5cfa3496f95936a >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dba8827a8c1b642d1b34bc5cf35aa4b4/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e1c7bf2add37671886c54c55f98f9fb7 >Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e1c7bf2add37671886c54c55f98f9fb7 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2005 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dba8827a8c1b642d1b34bc5cf35aa4b4/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e1c7bf2add37671886c54c55f98f9fb7 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e1c7bf2add37671886c54c55f98f9fb7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dba8827a8c1b642d1b34bc5cf35aa4b4/brick >Result: >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 133.244µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 127µs >[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_b3918aae89c9aab7c5cfa3496f95936a --virtualsize 2097152K --name brick_b3918aae89c9aab7c5cfa3496f95936a >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_b3918aae89c9aab7c5cfa3496f95936a" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e1c7bf2add37671886c54c55f98f9fb7 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e1c7bf2add37671886c54c55f98f9fb7 >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 157.908µs >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_01b4fda3de4a6a9df5de0de9826ad1aa >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e1c7bf2add37671886c54c55f98f9fb7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b3918aae89c9aab7c5cfa3496f95936a >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b3918aae89c9aab7c5cfa3496f95936a isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: 
Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b3918aae89c9aab7c5cfa3496f95936a /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b3918aae89c9aab7c5cfa3496f95936a xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_b3918aae89c9aab7c5cfa3496f95936a /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b3918aae89c9aab7c5cfa3496f95936a >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b3918aae89c9aab7c5cfa3496f95936a/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_01b4fda3de4a6a9df5de0de9826ad1aa --virtualsize 2097152K --name brick_01b4fda3de4a6a9df5de0de9826ad1aa >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_01b4fda3de4a6a9df5de0de9826ad1aa" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e631f0bf543f6c06867077cd16aad9e2 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2005 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b3918aae89c9aab7c5cfa3496f95936a/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_01b4fda3de4a6a9df5de0de9826ad1aa >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_01b4fda3de4a6a9df5de0de9826ad1aa isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b3918aae89c9aab7c5cfa3496f95936a/brick >Result: >[cmdexec] INFO 2018/06/08 08:54:51 Creating volume vol_5d0dfb0ebb846fcd225c890ec9cdb885 replica 3 >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print 
\"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_01b4fda3de4a6a9df5de0de9826ad1aa /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_01b4fda3de4a6a9df5de0de9826ad1aa xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_01b4fda3de4a6a9df5de0de9826ad1aa /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_01b4fda3de4a6a9df5de0de9826ad1aa >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_e631f0bf543f6c06867077cd16aad9e2 --virtualsize 1048576K --name brick_e631f0bf543f6c06867077cd16aad9e2 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e631f0bf543f6c06867077cd16aad9e2" created. 
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 106.266µs >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_9f2b3abdf7a755d6b95c412c440a955f >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_01b4fda3de4a6a9df5de0de9826ad1aa/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e631f0bf543f6c06867077cd16aad9e2 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e631f0bf543f6c06867077cd16aad9e2 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 224.571µs >[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2006 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_01b4fda3de4a6a9df5de0de9826ad1aa/brick >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 130.339µs 
>[kubeexec] DEBUG 2018/06/08 08:54:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e631f0bf543f6c06867077cd16aad9e2 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e631f0bf543f6c06867077cd16aad9e2 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_01b4fda3de4a6a9df5de0de9826ad1aa/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_e631f0bf543f6c06867077cd16aad9e2 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e631f0bf543f6c06867077cd16aad9e2 >Result: >[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_9f2b3abdf7a755d6b95c412c440a955f --virtualsize 2097152K --name brick_9f2b3abdf7a755d6b95c412c440a955f >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_9f2b3abdf7a755d6b95c412c440a955f" created. 
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f399afe93703292ed5ac22602d678d6f
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e631f0bf543f6c06867077cd16aad9e2/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_9f2b3abdf7a755d6b95c412c440a955f
>Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_9f2b3abdf7a755d6b95c412c440a955f isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_9f2b3abdf7a755d6b95c412c440a955f /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_9f2b3abdf7a755d6b95c412c440a955f xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_9f2b3abdf7a755d6b95c412c440a955f /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_9f2b3abdf7a755d6b95c412c440a955f
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_f399afe93703292ed5ac22602d678d6f --virtualsize 2097152K --name brick_f399afe93703292ed5ac22602d678d6f
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_f399afe93703292ed5ac22602d678d6f" created.
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_9f2b3abdf7a755d6b95c412c440a955f/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f399afe93703292ed5ac22602d678d6f
>Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f399afe93703292ed5ac22602d678d6f isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2006 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_9f2b3abdf7a755d6b95c412c440a955f/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr>
></cliOutput>
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f399afe93703292ed5ac22602d678d6f /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f399afe93703292ed5ac22602d678d6f xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_9f2b3abdf7a755d6b95c412c440a955f/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:52 Creating volume vol_43194b98d83e8b61b376ccc54f79333a replica 3
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 119.774µs
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f399afe93703292ed5ac22602d678d6f /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f399afe93703292ed5ac22602d678d6f
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 100.228µs
>[kubeexec] DEBUG 2018/06/08 08:54:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f399afe93703292ed5ac22602d678d6f/brick
>Result:
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 84.819µs
>[kubeexec] DEBUG 2018/06/08 08:54:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2007 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f399afe93703292ed5ac22602d678d6f/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_4f4d753d298c99eac492c32006c74484
>
>Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_4f4d753d298c99eac492c32006c74484
>[kubeexec] DEBUG 2018/06/08 08:54:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f399afe93703292ed5ac22602d678d6f/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e2e58ed3bab4af0c07b035a7306264ab
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 103.718µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 96.031µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 115.242µs
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_2ed96343627983cf9667b7cee4052d17
>
>Result: vg_9394bc70699b006c5460c9f654cf345f/tp_2ed96343627983cf9667b7cee4052d17
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_e2e58ed3bab4af0c07b035a7306264ab --virtualsize 2097152K --name brick_e2e58ed3bab4af0c07b035a7306264ab
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_e2e58ed3bab4af0c07b035a7306264ab" created.
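[Annotation] The kubeexec entries above repeat a fixed per-brick recipe: mkdir -p the mount point, lvcreate a thin LV inside a thin pool, mkfs.xfs, append an fstab entry via awk, mount, create the brick subdirectory, then chown/chmod it. A minimal sketch that reconstructs this command sequence for a hypothetical VG/brick id (the ids, the helper name, and the defaults below are illustrative, not taken from heketi's source):

```python
# Sketch: rebuild the per-brick shell command sequence seen in the kubeexec
# log entries above. vg/brick ids are made up; size/metadata defaults mirror
# the 2 GiB bricks in this log (illustrative values, not heketi internals).
def brick_commands(vg, brick, size_kb=2097152, metadata_kb=12288, gid=2007):
    dev = f"/dev/mapper/vg_{vg}-brick_{brick}"
    mnt = f"/var/lib/heketi/mounts/vg_{vg}/brick_{brick}"
    fstab_entry = f"{dev} {mnt} xfs rw,inode64,noatime,nouuid 1 2"
    return [
        f"mkdir -p {mnt}",
        (f"lvcreate --autobackup=n --poolmetadatasize {metadata_kb}K "
         f"--chunksize 256K --size {size_kb}K --thin vg_{vg}/tp_{brick} "
         f"--virtualsize {size_kb}K --name brick_{brick}"),
        f"mkfs.xfs -i size=512 -n size=8192 {dev}",
        f'awk "BEGIN {{print \\"{fstab_entry}\\" >> \\"/var/lib/heketi/fstab\\"}}"',
        f"mount -o rw,inode64,noatime,nouuid {dev} {mnt}",
        f"mkdir {mnt}/brick",
        f"chown :{gid} {mnt}/brick",
        f"chmod 2775 {mnt}/brick",
    ]

for cmd in brick_commands("deadbeef", "cafebabe"):
    print(cmd)
```

Comparing this generated list against a kubeexec trace makes it easy to spot which step of a brick's provisioning failed or never ran.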
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4f44e68452214ab813c4616ad12e2ee2
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_e2e58ed3bab4af0c07b035a7306264ab
>Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_e2e58ed3bab4af0c07b035a7306264ab isize=512 agcount=8, agsize=65536 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=524288, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_e2e58ed3bab4af0c07b035a7306264ab /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e2e58ed3bab4af0c07b035a7306264ab xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_e2e58ed3bab4af0c07b035a7306264ab /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e2e58ed3bab4af0c07b035a7306264ab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e2e58ed3bab4af0c07b035a7306264ab/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_d389f0278a774bd7443a09af960961d8/tp_4f44e68452214ab813c4616ad12e2ee2 --virtualsize 1048576K --name brick_4f44e68452214ab813c4616ad12e2ee2
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_4f44e68452214ab813c4616ad12e2ee2" created.
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2007 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e2e58ed3bab4af0c07b035a7306264ab/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4f44e68452214ab813c4616ad12e2ee2
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4f44e68452214ab813c4616ad12e2ee2 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e2e58ed3bab4af0c07b035a7306264ab/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:54 Creating volume vol_7d5a429f821efc8e8fe3f29569732b86 replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4f44e68452214ab813c4616ad12e2ee2 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4f44e68452214ab813c4616ad12e2ee2 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 125.911µs
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_4f44e68452214ab813c4616ad12e2ee2 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4f44e68452214ab813c4616ad12e2ee2
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 106.226µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 270.687µs
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4f44e68452214ab813c4616ad12e2ee2/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d105708cfed6c89ba77c7f9738020bf4
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_b8f78f128f20b61ef032ce9ee5b6481c replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e3ebda9db9a085114fe9732ab2df4869/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c4423da2a93ab3c5917479262cf1d93a/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_91d54591c1b07bc3a93d925cfbbd0f23/brick
>Result: volume create: vol_b8f78f128f20b61ef032ce9ee5b6481c: success: please start the volume to access data
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_3a4297677881963e3f80124971d50eea/tp_d105708cfed6c89ba77c7f9738020bf4 --virtualsize 1048576K --name brick_d105708cfed6c89ba77c7f9738020bf4
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_d105708cfed6c89ba77c7f9738020bf4" created.
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c36c6529173a51e7b9ae7a98545d98a2
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d105708cfed6c89ba77c7f9738020bf4
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d105708cfed6c89ba77c7f9738020bf4 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d105708cfed6c89ba77c7f9738020bf4 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d105708cfed6c89ba77c7f9738020bf4 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d105708cfed6c89ba77c7f9738020bf4 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d105708cfed6c89ba77c7f9738020bf4
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d105708cfed6c89ba77c7f9738020bf4/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:55 Creating volume vol_aee5088d2304cb95535752ef85f9f392 replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_d389f0278a774bd7443a09af960961d8/tp_c36c6529173a51e7b9ae7a98545d98a2 --virtualsize 1048576K --name brick_c36c6529173a51e7b9ae7a98545d98a2
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_c36c6529173a51e7b9ae7a98545d98a2" created.
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 128.193µs
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fcac0ae8bd9895d4780d78b18d3d6c38
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c36c6529173a51e7b9ae7a98545d98a2
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c36c6529173a51e7b9ae7a98545d98a2 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 120.909µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 118.045µs
>[kubeexec] DEBUG 2018/06/08 08:54:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c36c6529173a51e7b9ae7a98545d98a2 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c36c6529173a51e7b9ae7a98545d98a2 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_c36c6529173a51e7b9ae7a98545d98a2 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c36c6529173a51e7b9ae7a98545d98a2
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c36c6529173a51e7b9ae7a98545d98a2/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_3a4297677881963e3f80124971d50eea/tp_fcac0ae8bd9895d4780d78b18d3d6c38 --virtualsize 1048576K --name brick_fcac0ae8bd9895d4780d78b18d3d6c38
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_fcac0ae8bd9895d4780d78b18d3d6c38" created.
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1f46651d33f6ef49271d6d5382a7bc9c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fcac0ae8bd9895d4780d78b18d3d6c38
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fcac0ae8bd9895d4780d78b18d3d6c38 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fcac0ae8bd9895d4780d78b18d3d6c38 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fcac0ae8bd9895d4780d78b18d3d6c38 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_fcac0ae8bd9895d4780d78b18d3d6c38 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fcac0ae8bd9895d4780d78b18d3d6c38
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_1f46651d33f6ef49271d6d5382a7bc9c --virtualsize 1048576K --name brick_1f46651d33f6ef49271d6d5382a7bc9c
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_1f46651d33f6ef49271d6d5382a7bc9c" created.
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fcac0ae8bd9895d4780d78b18d3d6c38/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:56 Creating volume vol_c1dce5388e8c136a89e4a25e4cc97821 replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1f46651d33f6ef49271d6d5382a7bc9c
>Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1f46651d33f6ef49271d6d5382a7bc9c isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f26cbb3df4880b3fe991ee4e44697c2c
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 106.861µs
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1f46651d33f6ef49271d6d5382a7bc9c /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1f46651d33f6ef49271d6d5382a7bc9c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 114.744µs
>[kubeexec] DEBUG 2018/06/08 08:54:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1f46651d33f6ef49271d6d5382a7bc9c /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1f46651d33f6ef49271d6d5382a7bc9c
>Result:
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 127.526µs
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1f46651d33f6ef49271d6d5382a7bc9c/brick
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_3a4297677881963e3f80124971d50eea/tp_f26cbb3df4880b3fe991ee4e44697c2c --virtualsize 1048576K --name brick_f26cbb3df4880b3fe991ee4e44697c2c
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_f26cbb3df4880b3fe991ee4e44697c2c" created.
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_04daa5c9d25bc1a3074533508d73b587
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f26cbb3df4880b3fe991ee4e44697c2c
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f26cbb3df4880b3fe991ee4e44697c2c isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f26cbb3df4880b3fe991ee4e44697c2c /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f26cbb3df4880b3fe991ee4e44697c2c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_f26cbb3df4880b3fe991ee4e44697c2c /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f26cbb3df4880b3fe991ee4e44697c2c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_d389f0278a774bd7443a09af960961d8/tp_04daa5c9d25bc1a3074533508d73b587 --virtualsize 1048576K --name brick_04daa5c9d25bc1a3074533508d73b587
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_04daa5c9d25bc1a3074533508d73b587" created.
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f26cbb3df4880b3fe991ee4e44697c2c/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:57 Creating volume vol_8757685f765bbd74556cbf75086c88f6 replica 3
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_04daa5c9d25bc1a3074533508d73b587
>Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_04daa5c9d25bc1a3074533508d73b587 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_61815093958a51df17cf62e0a12a5451
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_04daa5c9d25bc1a3074533508d73b587 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_04daa5c9d25bc1a3074533508d73b587 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_04daa5c9d25bc1a3074533508d73b587 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_04daa5c9d25bc1a3074533508d73b587
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 111.955µs
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_04daa5c9d25bc1a3074533508d73b587/brick
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 170.282µs
>[kubeexec] DEBUG 2018/06/08 08:54:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_61815093958a51df17cf62e0a12a5451 --virtualsize 1048576K --name brick_61815093958a51df17cf62e0a12a5451
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_61815093958a51df17cf62e0a12a5451" created.
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 112.72µs
>[kubeexec] DEBUG 2018/06/08 08:54:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_61815093958a51df17cf62e0a12a5451
>Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_61815093958a51df17cf62e0a12a5451 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 08:54:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_61815093958a51df17cf62e0a12a5451 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_61815093958a51df17cf62e0a12a5451 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_61815093958a51df17cf62e0a12a5451 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_61815093958a51df17cf62e0a12a5451
>Result:
>[kubeexec] DEBUG 2018/06/08 08:54:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_61815093958a51df17cf62e0a12a5451/brick
>Result:
>[cmdexec] INFO 2018/06/08 08:54:58 Creating volume vol_a8678c97e2708cf6e00aea160a4d46a0 replica 3
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 131.53µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 145.019µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 95.504µs
>[kubeexec] DEBUG 2018/06/08 08:54:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_b8f78f128f20b61ef032ce9ee5b6481c
>Result: volume start: vol_b8f78f128f20b61ef032ce9ee5b6481c: success
>[heketi] INFO 2018/06/08 08:54:59 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:54:59 asynchttp.go:292: Completed job 086aebe6792ba08c072791f3d9745777 in 36.641172345s
>[kubeexec] DEBUG 2018/06/08 08:54:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_ad1e5849e9566f1bcaa09cfb9c0b96ef
>Result: volume delete: vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: success
>[heketi] INFO 2018/06/08 08:54:59 Deleting brick d9145089dd3be60b9df2a82315900670
>[heketi] INFO 2018/06/08 08:54:59 Deleting brick 14353da20ffb37550480dff24b915066
>[heketi] INFO 2018/06/08 08:54:59 Deleting brick e416f1b7fd62ee9320cfd9d57705d34c
>[kubeexec] DEBUG 2018/06/08 08:54:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246:
Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_07ef5105131fa51c35a9007ee213ea7a replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_063b0d12615d3375614df036642dfa39/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4dc091ed7622300b60597d7a49fae798/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6bfb8b3bea31f51d5be26acc7e81ce9a/brick >Result: volume create: vol_07ef5105131fa51c35a9007ee213ea7a: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:54:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_89ebb1e7eed2ff557488996a1657e75e replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_24c8f78999c94f4eacb087ac1fc563d1/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6c8f55013bee289cd516ef97746721ff/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_c77746b3040810cf2ae10fc585c67fd5/brick >Result: volume create: vol_89ebb1e7eed2ff557488996a1657e75e: success: please start the volume to access data >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 144.063µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 102.058µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 104.34µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 107.944µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 114.488µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 116.707µs >[negroni] Started GET 
/queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 118.675µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 116.963µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 117.116µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 114.838µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 179.066µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 162.098µs >[kubeexec] DEBUG 2018/06/08 08:55:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_a6be87754541710c38b420381c76fb8c replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cb9dab914105ca3d691abed1d53d7df9/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_47b16a0d2bab878899e523ab33a0c258/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_cd0ffc6bd88bdb1d969dceaafda517d8/brick >Result: volume create: vol_a6be87754541710c38b420381c76fb8c: success: please start the volume to access data >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 99.549µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 125.637µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 131.681µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 147.356µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 130.643µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 110.062µs >[negroni] Started GET 
/queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 167.903µs >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[heketi] ERROR 2018/06/08 08:55:05 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:399: Pending brick 214eb9006f9103530a1d0310d5f5dcfc can not be deleted >[negroni] Completed 500 Internal Server Error in 4.920195ms >[heketi] ERROR 2018/06/08 08:55:05 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1167: Delete Volume Build Failed: The target exists, contains other items, or is in use. >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 186.835µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 103.4µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 131.235µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 145.098µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 148.187µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 118.561µs >[kubeexec] DEBUG 2018/06/08 08:55:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_07ef5105131fa51c35a9007ee213ea7a >Result: volume start: vol_07ef5105131fa51c35a9007ee213ea7a: success >[heketi] INFO 2018/06/08 08:55:07 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:07 asynchttp.go:292: Completed job 252b1a3974a7d2a66ea36f37f2c82074 in 59.809013655s >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 120.422µs >[kubeexec] DEBUG 2018/06/08 08:55:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535 >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 141.784µs >[kubeexec] DEBUG 2018/06/08 08:55:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_214eb9006f9103530a1d0310d5f5dcfc > >Result: Logical volume "tp_214eb9006f9103530a1d0310d5f5dcfc" successfully removed >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 175.178µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 116.33µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 166.033µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 200.009µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 120.013µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 127.211µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 177.995µs >[kubeexec] DEBUG 2018/06/08 08:55:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_89ebb1e7eed2ff557488996a1657e75e >Result: volume start: vol_89ebb1e7eed2ff557488996a1657e75e: success >[kubeexec] DEBUG 2018/06/08 08:55:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_a6be87754541710c38b420381c76fb8c >Result: volume start: vol_a6be87754541710c38b420381c76fb8c: success >[kubeexec] ERROR 2018/06/08 08:55:10 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_15e0122e942fc41f80666a3714670682] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_15e0122e942fc41f80666a3714670682: failed: Volume vol_15e0122e942fc41f80666a3714670682 does not exist >] >[cmdexec] ERROR 2018/06/08 08:55:10 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_15e0122e942fc41f80666a3714670682: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_15e0122e942fc41f80666a3714670682: failed: Volume vol_15e0122e942fc41f80666a3714670682 does not exist >[heketi] INFO 2018/06/08 08:55:10 Deleting brick c805e58953c8aa4c8e4d7298563713f7 >[heketi] INFO 2018/06/08 08:55:10 Deleting brick 30ba742d27672a5289fe7ff6bd5ef3ce >[heketi] INFO 2018/06/08 08:55:10 Deleting brick 2aca658dfb3ac9ba0fbf538dad4caa3b >[heketi] INFO 2018/06/08 08:55:10 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:10 asynchttp.go:292: Completed job 7e56f206e49c37c88b54444c74b73010 in 48.082857251s >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 104.761µs >[heketi] INFO 2018/06/08 08:55:10 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:10 asynchttp.go:292: Completed job d5734675939ff78001cf479cb6e9b97c in 1m2.74765779s >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 146.997µs >[kubeexec] DEBUG 2018/06/08 08:55:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_f05028077b974ebc1f6621aee2184169 > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_f05028077b974ebc1f6621aee2184169 >[kubeexec] DEBUG 2018/06/08 08:55:11 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1bb49691eccf328025855a91ee8cbc66 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_1bb49691eccf328025855a91ee8cbc66 >[kubeexec] DEBUG 2018/06/08 08:55:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_1918777ef3ce84df17c8a114fb89f33e replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_cdc8c0d24f68026f06c17385d2fd0029/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_9be3d8f4748b4c09541f8a9de90c4c34/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_cecca27e94262fbaae81106a8cfeb0b2/brick >Result: volume create: vol_1918777ef3ce84df17c8a114fb89f33e: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:55:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_daf13b7f607d1b280c78e909af25a215 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_d8a382627ef730cbe0664d9439fd6a59/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_0a07a24688dfe13d4e1726b8d05e24a7/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bd4fcfec1db80add56b57e18bd681378/brick >Result: volume create: vol_daf13b7f607d1b280c78e909af25a215: success: please start the volume to access data >[kubeexec] ERROR 2018/06/08 08:55:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete 
vol_b99532640a5201d243193159ee762ae4] on glusterfs-storage-vsh2m: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_b99532640a5201d243193159ee762ae4: failed: Volume vol_b99532640a5201d243193159ee762ae4 does not exist >] >[cmdexec] ERROR 2018/06/08 08:55:11 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_b99532640a5201d243193159ee762ae4: Unable to execute command on glusterfs-storage-vsh2m: volume delete: vol_b99532640a5201d243193159ee762ae4: failed: Volume vol_b99532640a5201d243193159ee762ae4 does not exist >[heketi] INFO 2018/06/08 08:55:11 Deleting brick dad9ada287c446b0011af8dc964060e1 >[heketi] INFO 2018/06/08 08:55:11 Deleting brick 3dbab6cd698d01ea7f00dbd81329643a >[heketi] INFO 2018/06/08 08:55:11 Deleting brick df5559fdf15f6372e19e4e9f8bc1f129 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 118.28µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 162.966µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 116.533µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 120.348µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 132.792µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 145.415µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 115.45µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 111.765µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 98.91µs >[heketi] INFO 2018/06/08 08:55:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:55:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[negroni] Started GET 
/queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 288.71µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 178.546µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 304.063µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 178.715µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 170.135µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 120.621µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 150.035µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 134.461µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 138.435µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 194.316µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 282.575µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 188.817µs >[kubeexec] DEBUG 2018/06/08 08:55:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_1918777ef3ce84df17c8a114fb89f33e >Result: volume start: vol_1918777ef3ce84df17c8a114fb89f33e: success >[heketi] INFO 2018/06/08 08:55:18 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:18 asynchttp.go:292: Completed job 963dc2b509a04cdd5db1a8531a35c01c in 55.946016992s >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 195.974µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 174.331µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 
>[negroni] Completed 200 OK in 132.456µs >[kubeexec] DEBUG 2018/06/08 08:55:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_966a92ed3e4374a5e634f7b133c49e52 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[kubeexec] DEBUG 2018/06/08 08:55:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_dcaf25d0becadd0bb3c732c2c2ca27da replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_aec48b6cbce6be7256a2b86efac6aef6/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_26379a9db79930ad00a37453b492ebac/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_abd8495bfd886d541a449967f7bf70c0/brick >Result: volume create: vol_dcaf25d0becadd0bb3c732c2c2ca27da: success: please start the volume to access data >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 185.178µs >[kubeexec] DEBUG 2018/06/08 08:55:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_daf13b7f607d1b280c78e909af25a215 >Result: volume start: vol_daf13b7f607d1b280c78e909af25a215: success >[heketi] INFO 2018/06/08 08:55:19 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:19 asynchttp.go:292: Completed job 0f488f564115ab81bd476de5a18002e4 in 56.985428996s >[kubeexec] DEBUG 2018/06/08 08:55:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc 
Command: gluster --mode=script volume create vol_afef5f4a84abec3ab3e4f2a5bae2db23 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d985a824b73813069566b0a71bb73269/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ac27bcfa769ceba759b678d9e6b751cb/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dddf81e8d43f425cc26134166b21f15e/brick >Result: volume create: vol_afef5f4a84abec3ab3e4f2a5bae2db23: success: please start the volume to access data >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 170.884µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 262.991µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 130.572µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 118.065µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 157.103µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 121.83µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 306.621µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 124.22µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 181.783µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 203.867µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 121.877µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 142.647µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 127.974µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 
107.543µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 222.764µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 168.796µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 168.145µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 187.573µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 213.724µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 121.328µs >[kubeexec] DEBUG 2018/06/08 08:55:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_dcaf25d0becadd0bb3c732c2c2ca27da >Result: volume start: vol_dcaf25d0becadd0bb3c732c2c2ca27da: success >[heketi] INFO 2018/06/08 08:55:26 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:26 asynchttp.go:292: Completed job 12c7a0bf9d0844701651eb57cc4883ef in 1m3.441820767s >[kubeexec] DEBUG 2018/06/08 08:55:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_a4a6e4892da299f6c5634b8a2def697e --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 224.548µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 255.718µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 144.213µs >[kubeexec] DEBUG 2018/06/08 08:55:27 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_337bf2c01bf8c45eec5bab53ad5c2e46 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8d39164e3ebba9d10c8b6ed33dafabc7/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c740e50db586c869e27205aee2820dd7/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f88d16c3bc0200c12aa9db817d4d9f0a/brick >Result: volume create: vol_337bf2c01bf8c45eec5bab53ad5c2e46: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:55:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_afef5f4a84abec3ab3e4f2a5bae2db23 >Result: volume start: vol_afef5f4a84abec3ab3e4f2a5bae2db23: success >[heketi] INFO 2018/06/08 08:55:27 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:27 asynchttp.go:292: Completed job d4f0d63988cfe91663765df353eac613 in 1m4.467266546s >[kubeexec] DEBUG 2018/06/08 08:55:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_f73270331b95278f490fd1dfe0b010df replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_b35881161d7bf25bfa4af1bea4746d58/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_a84d4290cbebd984dd11b88eb806d93b/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_4fd98fc9e1c8a606b4b5c70bb555597e/brick >Result: volume create: vol_f73270331b95278f490fd1dfe0b010df: success: please start the volume to access data >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 
200 OK in 165.743µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 204.554µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 120.69µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 125.426µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 187.605µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 137.069µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 128.481µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 155.288µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 104.437µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 105.932µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 111.095µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 161.972µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 205.824µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 124.15µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 201.456µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 240.145µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 186.712µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 206.741µs >[kubeexec] DEBUG 2018/06/08 08:55:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume 
start vol_337bf2c01bf8c45eec5bab53ad5c2e46 >Result: volume start: vol_337bf2c01bf8c45eec5bab53ad5c2e46: success >[heketi] INFO 2018/06/08 08:55:33 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:33 asynchttp.go:292: Completed job 44bb2a0e030ba450c0c64afe4b039c77 in 1m10.815126551s >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 196.138µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 119.357µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 134.449µs >[kubeexec] DEBUG 2018/06/08 08:55:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_1ef58be42cf8ea7cf9298cff303e903a replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2bba219914cb29c5994021b1731b7763/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_8546c963be0cfb030d57435e94a50d61/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_bf4ebc8a0c81383f3dc357f04e4903c1/brick >Result: volume create: vol_1ef58be42cf8ea7cf9298cff303e903a: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:55:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_f73270331b95278f490fd1dfe0b010df >Result: volume start: vol_f73270331b95278f490fd1dfe0b010df: success >[heketi] INFO 2018/06/08 08:55:34 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:34 asynchttp.go:292: Completed job addeaaa11160c5240019710f9906a045 in 1m11.857123265s >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 167.59µs >[kubeexec] DEBUG 2018/06/08 08:55:34 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_b7fb8f19c77039982909b8868f7af4cc replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_30eb22cad2c3536d381b8d71943cffd2/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_df153caad54b6511504d5af0cdc4e2b5/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_54258f15bd3df0dfb8421196bdea9774/brick >Result: volume create: vol_b7fb8f19c77039982909b8868f7af4cc: success: please start the volume to access data >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 118.983µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 155.068µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 154.238µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 158.863µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 136.192µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 169.546µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 177.893µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 165.326µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 126.061µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 161.877µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 101.66µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 199.211µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada 
>[negroni] Completed 200 OK in 133.166µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 271.564µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 192.558µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 126.1µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 121.431µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 245.351µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 167.236µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 163.382µs >[kubeexec] DEBUG 2018/06/08 08:55:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_1ef58be42cf8ea7cf9298cff303e903a >Result: volume start: vol_1ef58be42cf8ea7cf9298cff303e903a: success >[heketi] INFO 2018/06/08 08:55:41 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:41 asynchttp.go:292: Completed job e437810dbd1e27db60605a0a68708d27 in 1m18.177355593s >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 241.418µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 155.613µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 129.149µs >[kubeexec] DEBUG 2018/06/08 08:55:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_b7fb8f19c77039982909b8868f7af4cc >Result: volume start: vol_b7fb8f19c77039982909b8868f7af4cc: success >[kubeexec] DEBUG 2018/06/08 08:55:42 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_9fb2830da79dd70d910dad8426dc236f replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9963c5d95158f58c8fc2f28d81d925ea/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a1150dfd2cad12bad69ef9cc830da83d/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_322eeb9ae02c4f9e338567566394d2c7/brick >Result: volume create: vol_9fb2830da79dd70d910dad8426dc236f: success: please start the volume to access data >[heketi] INFO 2018/06/08 08:55:42 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:42 asynchttp.go:292: Completed job be6e7eb7be5d4288a84770ab21a780d4 in 1m19.133261738s >[kubeexec] DEBUG 2018/06/08 08:55:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_1a828ff2310b778d09ffdadd755dc5ee replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_b2827681c0e1655380b4d5d53226bf7e/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_6319c734fbf95bbbfb1156b24dcc43ea/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_6167bec47d9258cf5b2a5e3a48c0d391/brick >Result: volume create: vol_1a828ff2310b778d09ffdadd755dc5ee: success: please start the volume to access data >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 160.295µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 193.814µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 124.874µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 
144.07µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 128.483µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 202.371µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 207.658µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 184.569µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 109.467µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 196.865µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 166.012µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 136.916µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 159.022µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 181.765µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 103.95µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 322.001µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 227.828µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 158.466µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 199.348µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 193.598µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 180.815µs >[kubeexec] DEBUG 2018/06/08 08:55:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create 
vol_e25438438fd2d50a0b07f26b4bfb338a replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_02dbc9628a1cd8a5f20d9fea7a615dcc/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_e1709c1db8499b885817e5f4682c56ca/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_a69c00298f148d2208ea7eab0903a3d2/brick >Result: volume create: vol_e25438438fd2d50a0b07f26b4bfb338a: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:55:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_9fb2830da79dd70d910dad8426dc236f >Result: volume start: vol_9fb2830da79dd70d910dad8426dc236f: success >[kubeexec] DEBUG 2018/06/08 08:55:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_1a828ff2310b778d09ffdadd755dc5ee >Result: volume start: vol_1a828ff2310b778d09ffdadd755dc5ee: success >[heketi] INFO 2018/06/08 08:55:49 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:49 asynchttp.go:292: Completed job 9a79c05867577fc3b0a49e88846f7d5d in 1m26.73862831s >[heketi] INFO 2018/06/08 08:55:49 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:49 asynchttp.go:292: Completed job 3addf5576b966f87a33743f58885b381 in 1m26.79415314s >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 108.368µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 212.585µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 140.222µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 196.628µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada 
>[negroni] Completed 200 OK in 258.441µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 115.811µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 125.886µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 187.265µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 132.571µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 138.895µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 172.063µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 124.661µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 184.385µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 145.928µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 165.68µs >[kubeexec] DEBUG 2018/06/08 08:55:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume create vol_21481d8911fe8ec238d97d71c1aa5cb3 replica 3 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_906b174d0f419aa3d1a9affe4675674a/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d1e261647e36a4dd323508df5b3decff/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_da658ca7b4961a716b886b57a67ff000/brick >Result: volume create: vol_21481d8911fe8ec238d97d71c1aa5cb3: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:55:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster 
--mode=script volume start vol_e25438438fd2d50a0b07f26b4bfb338a >Result: volume start: vol_e25438438fd2d50a0b07f26b4bfb338a: success >[kubeexec] DEBUG 2018/06/08 08:55:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_dc9ab13a25ccbad8262fba92766a31f9 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_211fca2005300a58cf3b464b89e9deea/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_3ca7a79bfe1ee21131ef20732eac93d7/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4d4160003ee228514a59c2c61932ceed/brick >Result: volume create: vol_dc9ab13a25ccbad8262fba92766a31f9: success: please start the volume to access data >[heketi] INFO 2018/06/08 08:55:54 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:55:54 asynchttp.go:292: Completed job cba4a2a4640269f19e7dad88f97689c6 in 1m31.021919512s >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 222.056µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 121.84µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 166.625µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 202.462µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 172.651µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 111.836µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 181.152µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 184.553µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 136.105µs >[negroni] Started GET 
/queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 177.905µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 216.398µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 185.884µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 148.124µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 142.769µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 116.638µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 132.05µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 225.772µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 107.552µs >[kubeexec] DEBUG 2018/06/08 08:56:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume start vol_21481d8911fe8ec238d97d71c1aa5cb3 >Result: volume start: vol_21481d8911fe8ec238d97d71c1aa5cb3: success >[heketi] INFO 2018/06/08 08:56:00 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:56:00 asynchttp.go:292: Completed job 85bc22f147332f528f993604639f4b29 in 1m37.333423104s >[kubeexec] DEBUG 2018/06/08 08:56:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d9145089dd3be60b9df2a82315900670 >[kubeexec] DEBUG 2018/06/08 08:56:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m 
Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c805e58953c8aa4c8e4d7298563713f7 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 211.438µs >[kubeexec] DEBUG 2018/06/08 08:56:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169 >Result: >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 174.203µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 115.168µs >[kubeexec] DEBUG 2018/06/08 08:56:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66 >Result: >[kubeexec] DEBUG 2018/06/08 08:56:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1 | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dad9ada287c446b0011af8dc964060e1 >[kubeexec] DEBUG 2018/06/08 08:56:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 32min ago > 
Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─18967 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > └─18968 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. 
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:56:01 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:56:01 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:56:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_226838416791f3286fcacb7e5f1ff59d replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0a39774f01ac7959661ff5afff2595ab/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_592a78c302b01c3ee9538269481405e7/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_11a59fea08d9b732822c2669ddaf54fa/brick >Result: volume create: vol_226838416791f3286fcacb7e5f1ff59d: success: please start the volume to access data >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 135.352µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 121.526µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 87.679µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 177.595µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 140.882µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 95.23µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 145.234µs >[negroni] 
Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 181.125µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 185.162µs >[kubeexec] DEBUG 2018/06/08 08:56:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_dc9ab13a25ccbad8262fba92766a31f9 >Result: volume start: vol_dc9ab13a25ccbad8262fba92766a31f9: success >[kubeexec] DEBUG 2018/06/08 08:56:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_a4a6e4892da299f6c5634b8a2def697e force >Result: volume stop: vol_a4a6e4892da299f6c5634b8a2def697e: success >[heketi] INFO 2018/06/08 08:56:04 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:56:04 asynchttp.go:292: Completed job 07364f7e79063d4523983c360e41ce2a in 1m41.643590333s >[kubeexec] DEBUG 2018/06/08 08:56:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d9145089dd3be60b9df2a82315900670 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d9145089dd3be60b9df2a82315900670 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 119.251µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 182.882µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 121.566µs >[kubeexec] DEBUG 2018/06/08 08:56:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv 
/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c805e58953c8aa4c8e4d7298563713f7 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_c805e58953c8aa4c8e4d7298563713f7 >[kubeexec] DEBUG 2018/06/08 08:56:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_f05028077b974ebc1f6621aee2184169/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:56:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_1bb49691eccf328025855a91ee8cbc66/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:56:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dad9ada287c446b0011af8dc964060e1 > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_dad9ada287c446b0011af8dc964060e1 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 129.898µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 265.755µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 204.972µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 126.038µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 115.93µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 76.648µs >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 179.113µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 132.441µs >[negroni] Started GET 
/queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 79.38µs >[kubeexec] DEBUG 2018/06/08 08:56:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_a4a6e4892da299f6c5634b8a2def697e >Result: volume delete: vol_a4a6e4892da299f6c5634b8a2def697e: success >[heketi] INFO 2018/06/08 08:56:08 Deleting brick 9775522b5bb908520213f17390c26d53 >[heketi] INFO 2018/06/08 08:56:08 Deleting brick b38349024b5350e969179b72b5c2af7c >[heketi] INFO 2018/06/08 08:56:08 Deleting brick bf9bfa3d8464d1e5476516d891583f10 >[kubeexec] DEBUG 2018/06/08 08:56:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_226838416791f3286fcacb7e5f1ff59d >Result: volume start: vol_226838416791f3286fcacb7e5f1ff59d: success >[kubeexec] DEBUG 2018/06/08 08:56:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_d21ed39ae095af2674175a798a0cb02c replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b4e462e2cc3dbccfd86e44f907dc7f00/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_dde70bf2e86fac7df0e0427af9bf5db3/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_2a97d96e94e0ff7c49d2b6d81dfbd8fd/brick >Result: volume create: vol_d21ed39ae095af2674175a798a0cb02c: success: please start the volume to access data >[heketi] INFO 2018/06/08 08:56:08 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:56:08 asynchttp.go:292: Completed job b6fa7484737d1ad70ee715a59b9488ee in 1m45.609457708s >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 159.846µs >[negroni] 
Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 219.083µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 100.762µs >[kubeexec] DEBUG 2018/06/08 08:56:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670 >Result: >[kubeexec] DEBUG 2018/06/08 08:56:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7 >Result: >[kubeexec] DEBUG 2018/06/08 08:56:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_f05028077b974ebc1f6621aee2184169 > >Result: Logical volume "brick_f05028077b974ebc1f6621aee2184169" successfully removed >[kubeexec] DEBUG 2018/06/08 08:56:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_1bb49691eccf328025855a91ee8cbc66 > >Result: Logical volume "brick_1bb49691eccf328025855a91ee8cbc66" successfully removed >[kubeexec] DEBUG 2018/06/08 08:56:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1 >Result: >[kubeexec] DEBUG 2018/06/08 08:56:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com 
Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9775522b5bb908520213f17390c26d53 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 158.122µs >[kubeexec] DEBUG 2018/06/08 08:56:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_d9145089dd3be60b9df2a82315900670/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 131.774µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 145.159µs >[kubeexec] DEBUG 2018/06/08 08:56:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_c805e58953c8aa4c8e4d7298563713f7/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:56:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_f05028077b974ebc1f6621aee2184169 > >Result: 0 >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 109.763µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 158.683µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 73.65µs >[kubeexec] DEBUG 2018/06/08 08:56:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count 
vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_1bb49691eccf328025855a91ee8cbc66 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:56:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_dad9ada287c446b0011af8dc964060e1/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 125.148µs >[kubeexec] DEBUG 2018/06/08 08:56:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9775522b5bb908520213f17390c26d53 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_9775522b5bb908520213f17390c26d53 >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 100.406µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 94.522µs >[kubeexec] ERROR 2018/06/08 08:56:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:56:12 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] DEBUG 2018/06/08 08:56:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: 
gluster --mode=script volume start vol_d21ed39ae095af2674175a798a0cb02c >Result: volume start: vol_d21ed39ae095af2674175a798a0cb02c: success >[heketi] INFO 2018/06/08 08:56:12 Create Volume succeeded >[asynchttp] INFO 2018/06/08 08:56:12 asynchttp.go:292: Completed job feaab31cecfc86532b5845f95cf63913 in 1m49.028640227s >[kubeexec] DEBUG 2018/06/08 08:56:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_d9145089dd3be60b9df2a82315900670 > >Result: Logical volume "brick_d9145089dd3be60b9df2a82315900670" successfully removed >[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad >[negroni] Completed 200 OK in 116.039µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 124.969µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 71.95µs >[kubeexec] ERROR 2018/06/08 08:56:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484] on glusterfs-storage-gxp7c: Err[command terminated with exit code 32]: Stdout []: Stderr [umount: /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484: target is busy. 
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>]
>[cmdexec] ERROR 2018/06/08 08:56:13 /src/github.com/heketi/heketi/executors/cmdexec/brick.go:151: <nil>
>[kubeexec] DEBUG 2018/06/08 08:56:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_5d0dfb0ebb846fcd225c890ec9cdb885 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_b3918aae89c9aab7c5cfa3496f95936a/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c8dd6b39719a6bc75ea331fae5a92396/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dba8827a8c1b642d1b34bc5cf35aa4b4/brick
>Result: volume create: vol_5d0dfb0ebb846fcd225c890ec9cdb885: success: please start the volume to access data
>[kubeexec] DEBUG 2018/06/08 08:56:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_c805e58953c8aa4c8e4d7298563713f7
>
>Result: Logical volume "brick_c805e58953c8aa4c8e4d7298563713f7" successfully removed
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 118.937µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 127.517µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 90.374µs
>[kubeexec] DEBUG 2018/06/08 08:56:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17
>Result:
>[kubeexec] DEBUG 2018/06/08 08:56:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_f05028077b974ebc1f6621aee2184169
>
>Result: Logical volume "tp_f05028077b974ebc1f6621aee2184169" successfully removed
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 123.566µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 115.524µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 255.379µs
>[kubeexec] DEBUG 2018/06/08 08:56:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_1bb49691eccf328025855a91ee8cbc66
>
>Result: Logical volume "tp_1bb49691eccf328025855a91ee8cbc66" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:56:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_dad9ada287c446b0011af8dc964060e1
>
>Result: Logical volume "brick_dad9ada287c446b0011af8dc964060e1" successfully removed
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 150.41µs
>[kubeexec] DEBUG 2018/06/08 08:56:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_5d0dfb0ebb846fcd225c890ec9cdb885
>Result: volume start: vol_5d0dfb0ebb846fcd225c890ec9cdb885: success
>[heketi] INFO 2018/06/08 08:56:15 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:56:15 asynchttp.go:292: Completed job 2d92a9e5a1719ab40f22667eb62b0d09 in 1m52.610923338s
>[kubeexec] DEBUG 2018/06/08 08:56:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53
>Result:
>[kubeexec] DEBUG 2018/06/08 08:56:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_8757685f765bbd74556cbf75086c88f6 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e1c7bf2add37671886c54c55f98f9fb7/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1f46651d33f6ef49271d6d5382a7bc9c/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_f26cbb3df4880b3fe991ee4e44697c2c/brick
>Result: volume create: vol_8757685f765bbd74556cbf75086c88f6: success: please start the volume to access data
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 145.338µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 66.176µs
>[kubeexec] DEBUG 2018/06/08 08:56:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d9145089dd3be60b9df2a82315900670
>
>Result: 0
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 121.243µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 85.946µs
>[negroni] Completed 200 OK in 243.249µs
>[kubeexec] DEBUG 2018/06/08 08:56:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_c805e58953c8aa4c8e4d7298563713f7
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:56:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_f05028077b974ebc1f6621aee2184169
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 177.828µs
>[kubeexec] DEBUG 2018/06/08 08:56:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_1bb49691eccf328025855a91ee8cbc66
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 153.239µs
>[negroni] Completed 200 OK in 211.145µs
>[kubeexec] DEBUG 2018/06/08 08:56:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_dad9ada287c446b0011af8dc964060e1
>
>Result: 0
>[kubeexec] DEBUG 2018/06/08 08:56:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_9775522b5bb908520213f17390c26d53/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 128.848µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 119.34µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 117.5µs
>[kubeexec] DEBUG 2018/06/08 08:56:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_d9145089dd3be60b9df2a82315900670
>
>Result: Logical volume "tp_d9145089dd3be60b9df2a82315900670" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:56:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_c805e58953c8aa4c8e4d7298563713f7
>
>Result: Logical volume "tp_c805e58953c8aa4c8e4d7298563713f7" successfully removed
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 122.889µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 182.776µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 74.261µs
>[kubeexec] DEBUG 2018/06/08 08:56:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_dad9ada287c446b0011af8dc964060e1
>
>Result: Logical volume "tp_dad9ada287c446b0011af8dc964060e1" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:56:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_8757685f765bbd74556cbf75086c88f6
>Result: volume start: vol_8757685f765bbd74556cbf75086c88f6: success
>[kubeexec] DEBUG 2018/06/08 08:56:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_43194b98d83e8b61b376ccc54f79333a replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_9f2b3abdf7a755d6b95c412c440a955f/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e54cabdc56fef4e5b4a11c7a72eaff3d/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_01b4fda3de4a6a9df5de0de9826ad1aa/brick
>Result: volume create: vol_43194b98d83e8b61b376ccc54f79333a: success: please start the volume to access data
>[heketi] INFO 2018/06/08 08:56:20 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:56:20 asynchttp.go:292: Completed job 1b65bda5683c59bd435d17b7a832a860 in 1m56.763194326s
>[kubeexec] DEBUG 2018/06/08 08:56:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9775522b5bb908520213f17390c26d53
>
>Result: Logical volume "brick_9775522b5bb908520213f17390c26d53" successfully removed
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 122.977µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 149.138µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 60.369µs
>[kubeexec] DEBUG 2018/06/08 08:56:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_d9145089dd3be60b9df2a82315900670
>Result:
>[kubeexec] DEBUG 2018/06/08 08:56:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_c805e58953c8aa4c8e4d7298563713f7
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 135.114µs
>[kubeexec] DEBUG 2018/06/08 08:56:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_dad9ada287c446b0011af8dc964060e1
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 168.605µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 113.312µs
>[kubeexec] DEBUG 2018/06/08 08:56:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_9775522b5bb908520213f17390c26d53
>
>Result: 0
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 111.85µs
>[kubeexec] DEBUG 2018/06/08 08:56:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_9775522b5bb908520213f17390c26d53
>
>Result: Logical volume "tp_9775522b5bb908520213f17390c26d53" successfully removed
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 107.161µs
>[negroni] Completed 200 OK in 80.291µs
>[kubeexec] DEBUG 2018/06/08 08:56:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9775522b5bb908520213f17390c26d53
>Result:
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 170.426µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 117.107µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 308.439µs
>[kubeexec] DEBUG 2018/06/08 08:56:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_43194b98d83e8b61b376ccc54f79333a
>Result: volume start: vol_43194b98d83e8b61b376ccc54f79333a: success
>[kubeexec] DEBUG 2018/06/08 08:56:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_a8678c97e2708cf6e00aea160a4d46a0 replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_e631f0bf543f6c06867077cd16aad9e2/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_04daa5c9d25bc1a3074533508d73b587/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_61815093958a51df17cf62e0a12a5451/brick
>Result: volume create: vol_a8678c97e2708cf6e00aea160a4d46a0: success: please start the volume to access data
>[heketi] INFO 2018/06/08 08:56:24 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:56:24 asynchttp.go:292: Completed job 5a4d86d3903a860d822a06fd74343b52 in 2m1.266596563s
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 223.741µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 125.118µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 61.636µs
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 109.914µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 92.639µs
>[negroni] Completed 200 OK in 182.443µs
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 188.832µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 186.766µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 209.532µs
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 170.156µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 181.688µs
>[negroni] Completed 200 OK in 147.872µs
>[kubeexec] DEBUG 2018/06/08 08:56:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_7d5a429f821efc8e8fe3f29569732b86 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e2e58ed3bab4af0c07b035a7306264ab/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_68704fc0eb854fe2a0157b0261302792/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f399afe93703292ed5ac22602d678d6f/brick
>Result: volume create: vol_7d5a429f821efc8e8fe3f29569732b86: success: please start the volume to access data
>[kubeexec] DEBUG 2018/06/08 08:56:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_a8678c97e2708cf6e00aea160a4d46a0
>Result: volume start: vol_a8678c97e2708cf6e00aea160a4d46a0: success
>[heketi] INFO 2018/06/08 08:56:28 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:56:28 asynchttp.go:292: Completed job fe707ace9f825f3ad19b3b8f28410ae4 in 2m4.467558584s
>[kubeexec] DEBUG 2018/06/08 08:56:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c | cut -d" " -f1
>Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e416f1b7fd62ee9320cfd9d57705d34c
>[kubeexec] DEBUG 2018/06/08 08:56:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce | cut -d" " -f1
>Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_30ba742d27672a5289fe7ff6bd5ef3ce
>[kubeexec] DEBUG 2018/06/08 08:56:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129 | cut -d" " -f1
>Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_df5559fdf15f6372e19e4e9f8bc1f129
>[kubeexec] DEBUG 2018/06/08 08:56:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c | cut -d" " -f1
>Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b38349024b5350e969179b72b5c2af7c
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 131.578µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 155.013µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 93.949µs
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 261.184µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 149.706µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 93.012µs
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 205.352µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 147.733µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 76.161µs
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 200 OK in 123.801µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 122.264µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 162.439µs
>[kubeexec] ERROR 2018/06/08 08:56:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 08:56:32 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 08:56:32 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 08:56:32 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[kubeexec] DEBUG 2018/06/08 08:56:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_7d5a429f821efc8e8fe3f29569732b86
>Result: volume start: vol_7d5a429f821efc8e8fe3f29569732b86: success
>[asynchttp] INFO 2018/06/08 08:56:32 asynchttp.go:292: Completed job 35a61a66b97e9b1b5514e032177179ad in 2m8.769856325s
>[heketi] ERROR 2018/06/08 08:56:32 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] INFO 2018/06/08 08:56:32 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:56:32 asynchttp.go:292: Completed job edf46a25868eb4962cc099193bc77737 in 2m9.019646217s
>[kubeexec] DEBUG 2018/06/08 08:56:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_4f4d753d298c99eac492c32006c74484/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:56:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_2ed96343627983cf9667b7cee4052d17/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:56:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_aee5088d2304cb95535752ef85f9f392 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d105708cfed6c89ba77c7f9738020bf4/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_ad542929ce4fd8719fbe5fc44df98dbd/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_4f44e68452214ab813c4616ad12e2ee2/brick
>Result: volume create: vol_aee5088d2304cb95535752ef85f9f392: success: please start the volume to access data
>[kubeexec] DEBUG 2018/06/08 08:56:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e416f1b7fd62ee9320cfd9d57705d34c
>
>Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_e416f1b7fd62ee9320cfd9d57705d34c
>[negroni] Started GET /queue/35a61a66b97e9b1b5514e032177179ad
>[negroni] Completed 500 Internal Server Error in 126.956µs
>[kubeexec] DEBUG 2018/06/08 08:56:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_30ba742d27672a5289fe7ff6bd5ef3ce
>
>Result: vg_9394bc70699b006c5460c9f654cf345f/tp_30ba742d27672a5289fe7ff6bd5ef3ce
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 112.627µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 133.469µs
>[kubeexec] DEBUG 2018/06/08 08:56:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_df5559fdf15f6372e19e4e9f8bc1f129
>
>Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_df5559fdf15f6372e19e4e9f8bc1f129
>[kubeexec] DEBUG 2018/06/08 08:56:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b38349024b5350e969179b72b5c2af7c
>
>Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_b38349024b5350e969179b72b5c2af7c
>[kubeexec] DEBUG 2018/06/08 08:56:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_2ed96343627983cf9667b7cee4052d17
>
>Result: Logical volume "brick_2ed96343627983cf9667b7cee4052d17" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:56:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c
>Result:
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 113.34µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 82.503µs
>[kubeexec] DEBUG 2018/06/08 08:56:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce
>Result:
>[kubeexec] DEBUG 2018/06/08 08:56:34 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129
>Result:
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 127.813µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 81.702µs
>[kubeexec] DEBUG 2018/06/08 08:56:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c
>Result:
>[kubeexec] DEBUG 2018/06/08 08:56:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_2ed96343627983cf9667b7cee4052d17
>
>Result: 0
>[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 202 Accepted in 20.187996ms
>[asynchttp] INFO 2018/06/08 08:56:35 asynchttp.go:288: Started job b391b58714366385bc9a2411624b4aa9
>[heketi] INFO 2018/06/08 08:56:35 Started async operation: Delete Volume
>[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9
>[negroni] Completed 200 OK in 106.076µs
>[kubeexec] DEBUG 2018/06/08 08:56:35 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_e416f1b7fd62ee9320cfd9d57705d34c/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 130.735µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 93.528µs
>[kubeexec] DEBUG 2018/06/08 08:56:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_aee5088d2304cb95535752ef85f9f392
>Result: volume start: vol_aee5088d2304cb95535752ef85f9f392: success
>[heketi] INFO 2018/06/08 08:56:36 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:56:36 asynchttp.go:292: Completed job 01bc6bf544472a829302d04d4df8b0d1 in 2m12.644002075s
>[kubeexec] DEBUG 2018/06/08 08:56:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_30ba742d27672a5289fe7ff6bd5ef3ce/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9
>[negroni] Completed 200 OK in 148.545µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 112.699µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 77.146µs
>[kubeexec] DEBUG 2018/06/08 08:56:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_df5559fdf15f6372e19e4e9f8bc1f129/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/06/08 08:56:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_c1dce5388e8c136a89e4a25e4cc97821 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_fcac0ae8bd9895d4780d78b18d3d6c38/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_1039ed81d7d98ea183e9c7d8d00c1b6d/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_c36c6529173a51e7b9ae7a98545d98a2/brick
>Result: volume create: vol_c1dce5388e8c136a89e4a25e4cc97821: success: please start the volume to access data
>[kubeexec] DEBUG 2018/06/08 08:56:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_b38349024b5350e969179b72b5c2af7c/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9
>[negroni] Completed 200 OK in 143.018µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 166.725µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 296.011µs
>[kubeexec] DEBUG 2018/06/08 08:56:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_2ed96343627983cf9667b7cee4052d17
>
>Result: Logical volume "tp_2ed96343627983cf9667b7cee4052d17" successfully removed
>[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9
>[negroni] Completed 200 OK in 160.788µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 130.94µs
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 158.996µs
>[kubeexec] DEBUG 2018/06/08 08:56:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr>
></cliOutput>
>[kubeexec] DEBUG 2018/06/08 08:56:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_c1dce5388e8c136a89e4a25e4cc97821
>Result: volume start: vol_c1dce5388e8c136a89e4a25e4cc97821: success
>[heketi] INFO 2018/06/08 08:56:39 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 08:56:39 asynchttp.go:292: Completed job a85d28ab7c8834b5c41247d658312b8f in 2m15.922834213s
>[kubeexec] DEBUG 2018/06/08 08:56:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066 | cut -d" " -f1
>Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_14353da20ffb37550480dff24b915066
>[kubeexec] DEBUG 2018/06/08 08:56:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e416f1b7fd62ee9320cfd9d57705d34c
>
>Result: Logical volume "brick_e416f1b7fd62ee9320cfd9d57705d34c" successfully removed
>[kubeexec] DEBUG 2018/06/08 08:56:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_cc2483d7b49bc029b5200a024cac7535/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9
>[negroni] Completed 200 OK in 124.621µs
>[kubeexec] DEBUG 2018/06/08 08:56:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_30ba742d27672a5289fe7ff6bd5ef3ce
>
>Result: Logical volume "brick_30ba742d27672a5289fe7ff6bd5ef3ce" successfully removed
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada
>[negroni] Completed 200 OK in 103.952µs
>[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363
>[negroni] Completed 200 OK in 57.353µs
>[kubeexec] DEBUG 2018/06/08 08:56:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_214eb9006f9103530a1d0310d5f5dcfc
>Result:
>[heketi] ERROR 2018/06/08 08:56:40 /src/github.com/heketi/heketi/apps/glusterfs/brick_create.go:73: Unable to execute command on glusterfs-storage-gxp7c: umount: /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[heketi] ERROR 2018/06/08 08:56:40 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:532: Unable to delete bricks: Unable to execute command on glusterfs-storage-gxp7c: umount: /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[heketi] ERROR 2018/06/08 08:56:40 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to execute command on glusterfs-storage-gxp7c: umount: /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484: target is busy.
> (In some cases useful info about processes that use
> the device is found by lsof(8) or fuser(1))
>[asynchttp] INFO 2018/06/08 08:56:40 asynchttp.go:292: Completed job 20e932645b6c4b3853e7a46917e686e7 in 2m32.540224696s
>[heketi] ERROR 2018/06/08 08:56:40 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to execute command on glusterfs-storage-gxp7c: umount: /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_4f4d753d298c99eac492c32006c74484: target is busy.
> (In some cases useful info about processes that use > the device is found by lsof(8) or fuser(1)) >[kubeexec] DEBUG 2018/06/08 08:56:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_df5559fdf15f6372e19e4e9f8bc1f129 > >Result: Logical volume "brick_df5559fdf15f6372e19e4e9f8bc1f129" successfully removed >[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 200 OK in 109.423µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 133.304µs >[negroni] Completed 200 OK in 91.339µs >[kubeexec] DEBUG 2018/06/08 08:56:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2aca658dfb3ac9ba0fbf538dad4caa3b >[kubeexec] DEBUG 2018/06/08 08:56:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_b38349024b5350e969179b72b5c2af7c > >Result: Logical volume "brick_b38349024b5350e969179b72b5c2af7c" successfully removed >[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 200 OK in 157.414µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 138.64µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 106.151µs >[kubeexec] DEBUG 2018/06/08 08:56:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: 
Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a | cut -d" " -f1 >Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3dbab6cd698d01ea7f00dbd81329643a >[kubeexec] DEBUG 2018/06/08 08:56:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_2ed96343627983cf9667b7cee4052d17 >Result: >[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 200 OK in 113.897µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 99.841µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 91.832µs >[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 200 OK in 128.328µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 110.681µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 132.833µs >[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 200 OK in 222.242µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 117.625µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 97.063µs >[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 200 OK in 194.573µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 163.955µs >[negroni] Completed 200 OK in 88.158µs >[kubeexec] ERROR 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume 
stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] DEBUG 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_966a92ed3e4374a5e634f7b133c49e52 force >Result: volume stop: vol_966a92ed3e4374a5e634f7b133c49e52: success >[kubeexec] DEBUG 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 33min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id 
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─19926 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > └─19927 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:56:46 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:56:46 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_e416f1b7fd62ee9320cfd9d57705d34c > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bf9bfa3d8464d1e5476516d891583f10 >[kubeexec] DEBUG 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_30ba742d27672a5289fe7ff6bd5ef3ce > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_14353da20ffb37550480dff24b915066 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_14353da20ffb37550480dff24b915066 >[kubeexec] DEBUG 2018/06/08 08:56:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_df5559fdf15f6372e19e4e9f8bc1f129 > >Result: 0 >[negroni] Started GET 
/queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 200 OK in 122.338µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 94.662µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 152.393µs >[kubeexec] DEBUG 2018/06/08 08:56:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_cc2483d7b49bc029b5200a024cac7535 > >Result: Logical volume "brick_cc2483d7b49bc029b5200a024cac7535" successfully removed >[kubeexec] DEBUG 2018/06/08 08:56:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_b38349024b5350e969179b72b5c2af7c > >Result: 0 >[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 200 OK in 166.266µs >[kubeexec] DEBUG 2018/06/08 08:56:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2aca658dfb3ac9ba0fbf538dad4caa3b > >Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2aca658dfb3ac9ba0fbf538dad4caa3b >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 114.399µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 101.103µs >[kubeexec] ERROR 2018/06/08 08:56:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: 
vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:56:48 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:56:48 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:56:48 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:56:48 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 08:56:48 asynchttp.go:292: Completed job b391b58714366385bc9a2411624b4aa9 in 12.481115401s >[kubeexec] DEBUG 2018/06/08 08:56:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3dbab6cd698d01ea7f00dbd81329643a > >Result: 
vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_3dbab6cd698d01ea7f00dbd81329643a >[negroni] Started GET /queue/b391b58714366385bc9a2411624b4aa9 >[negroni] Completed 500 Internal Server Error in 130.091µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 108.158µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 72.236µs >[kubeexec] DEBUG 2018/06/08 08:56:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 31min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option 
*-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > ├─17962 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > └─17963 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 08:56:49 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:56:49 Cleaned 0 nodes from health cache >[kubeexec] DEBUG 2018/06/08 08:56:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_966a92ed3e4374a5e634f7b133c49e52 >Result: volume delete: vol_966a92ed3e4374a5e634f7b133c49e52: success >[heketi] INFO 2018/06/08 08:56:49 Deleting brick 00c2d7c5fda2de77f930386434235209 >[heketi] INFO 2018/06/08 08:56:49 Deleting brick 0fb75c3a027f88c53839c0f9578a1801 >[heketi] INFO 2018/06/08 08:56:49 Deleting brick 3fce0ef044cb1f17096a5bf85437e0db >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 138.792µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 132.119µs >[kubeexec] DEBUG 2018/06/08 08:56:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_e416f1b7fd62ee9320cfd9d57705d34c > >Result: Logical volume "tp_e416f1b7fd62ee9320cfd9d57705d34c" successfully removed >[kubeexec] DEBUG 2018/06/08 08:56:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bf9bfa3d8464d1e5476516d891583f10 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_bf9bfa3d8464d1e5476516d891583f10 >[kubeexec] DEBUG 2018/06/08 08:56:50 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db | cut -d" " -f1 >Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_3fce0ef044cb1f17096a5bf85437e0db >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 125.379µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 93.386µs >[kubeexec] DEBUG 2018/06/08 08:56:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_30ba742d27672a5289fe7ff6bd5ef3ce > >Result: Logical volume "tp_30ba742d27672a5289fe7ff6bd5ef3ce" successfully removed >[kubeexec] DEBUG 2018/06/08 08:56:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066 >Result: >[kubeexec] DEBUG 2018/06/08 08:56:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_3fce0ef044cb1f17096a5bf85437e0db > >Result: vg_d389f0278a774bd7443a09af960961d8/tp_3fce0ef044cb1f17096a5bf85437e0db >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 100.751µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 67.529µs >[kubeexec] DEBUG 2018/06/08 08:56:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_df5559fdf15f6372e19e4e9f8bc1f129 > >Result: Logical volume "tp_df5559fdf15f6372e19e4e9f8bc1f129" successfully removed >[kubeexec] DEBUG 2018/06/08 08:56:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_cc2483d7b49bc029b5200a024cac7535 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:56:52 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 168.998µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 136.326µs >[kubeexec] DEBUG 2018/06/08 08:56:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_b38349024b5350e969179b72b5c2af7c > >Result: Logical volume "tp_b38349024b5350e969179b72b5c2af7c" successfully removed >[kubeexec] DEBUG 2018/06/08 08:56:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 131.67µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 73.707µs >[kubeexec] DEBUG 2018/06/08 08:56:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: 
Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_3fce0ef044cb1f17096a5bf85437e0db/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:56:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801 | cut -d" " -f1 >Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0fb75c3a027f88c53839c0f9578a1801 >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 120.73µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 103.074µs >[kubeexec] DEBUG 2018/06/08 08:56:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a >Result: >[kubeexec] DEBUG 2018/06/08 08:56:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_3fce0ef044cb1f17096a5bf85437e0db > >Result: Logical volume "brick_3fce0ef044cb1f17096a5bf85437e0db" successfully removed >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 152.702µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 111.423µs >[kubeexec] DEBUG 2018/06/08 08:56:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e416f1b7fd62ee9320cfd9d57705d34c >Result: >[kubeexec] DEBUG 2018/06/08 08:56:56 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_00c2d7c5fda2de77f930386434235209 >[kubeexec] DEBUG 2018/06/08 08:56:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_d389f0278a774bd7443a09af960961d8/tp_3fce0ef044cb1f17096a5bf85437e0db > >Result: 0 >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 168.876µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 99.571µs >[kubeexec] DEBUG 2018/06/08 08:56:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_30ba742d27672a5289fe7ff6bd5ef3ce >Result: >[kubeexec] DEBUG 2018/06/08 08:56:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10 >Result: >[kubeexec] DEBUG 2018/06/08 08:56:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_3fce0ef044cb1f17096a5bf85437e0db > >Result: Logical volume "tp_3fce0ef044cb1f17096a5bf85437e0db" successfully removed >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 128.679µs >[negroni] Started GET 
/queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 98.699µs >[kubeexec] DEBUG 2018/06/08 08:56:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_df5559fdf15f6372e19e4e9f8bc1f129 >Result: >[kubeexec] DEBUG 2018/06/08 08:56:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_14353da20ffb37550480dff24b915066/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 166.426µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 200.813µs >[kubeexec] DEBUG 2018/06/08 08:56:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_3fce0ef044cb1f17096a5bf85437e0db >Result: >[kubeexec] DEBUG 2018/06/08 08:56:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_b38349024b5350e969179b72b5c2af7c >Result: >[kubeexec] DEBUG 2018/06/08 08:56:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_cc2483d7b49bc029b5200a024cac7535 > >Result: Logical volume "tp_cc2483d7b49bc029b5200a024cac7535" successfully removed >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 130.335µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 
OK in 96.302µs >[kubeexec] DEBUG 2018/06/08 08:57:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0fb75c3a027f88c53839c0f9578a1801 > >Result: vg_9394bc70699b006c5460c9f654cf345f/tp_0fb75c3a027f88c53839c0f9578a1801 >[kubeexec] DEBUG 2018/06/08 08:57:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_2aca658dfb3ac9ba0fbf538dad4caa3b/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:57:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801 >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 159.139µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 136.475µs >[kubeexec] DEBUG 2018/06/08 08:57:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_3dbab6cd698d01ea7f00dbd81329643a/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:57:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_0fb75c3a027f88c53839c0f9578a1801/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 125.984µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 75.996µs >[kubeexec] DEBUG 2018/06/08 08:57:02 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_00c2d7c5fda2de77f930386434235209 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_00c2d7c5fda2de77f930386434235209 >[kubeexec] DEBUG 2018/06/08 08:57:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_0fb75c3a027f88c53839c0f9578a1801 > >Result: Logical volume "brick_0fb75c3a027f88c53839c0f9578a1801" successfully removed >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 91.307µs >[negroni] Completed 200 OK in 389.813µs >[kubeexec] DEBUG 2018/06/08 08:57:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_bf9bfa3d8464d1e5476516d891583f10/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:57:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_0fb75c3a027f88c53839c0f9578a1801 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:57:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_14353da20ffb37550480dff24b915066 > >Result: Logical volume "brick_14353da20ffb37550480dff24b915066" successfully removed >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 
OK in 119.056µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 92.233µs >[kubeexec] DEBUG 2018/06/08 08:57:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_0fb75c3a027f88c53839c0f9578a1801 > >Result: Logical volume "tp_0fb75c3a027f88c53839c0f9578a1801" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_cc2483d7b49bc029b5200a024cac7535 >Result: >[heketi] INFO 2018/06/08 08:57:04 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:57:04 Creating brick bd498e4984809a277823f95d7ac56f3a >[heketi] INFO 2018/06/08 08:57:04 Creating brick f6df34957c593ea7de0d475a1ccbded6 >[heketi] INFO 2018/06/08 08:57:04 Creating brick e9b7f209aa8449a1dc89d03f1f8aa3ed >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 112.166µs >[negroni] Completed 200 OK in 359.668µs >[kubeexec] DEBUG 2018/06/08 08:57:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_0fb75c3a027f88c53839c0f9578a1801 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_2aca658dfb3ac9ba0fbf538dad4caa3b > >Result: Logical volume "brick_2aca658dfb3ac9ba0fbf538dad4caa3b" successfully removed >[kubeexec] DEBUG 
2018/06/08 08:57:05 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f6df34957c593ea7de0d475a1ccbded6 >Result: >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 17.132662ms >[asynchttp] INFO 2018/06/08 08:57:05 asynchttp.go:288: Started job e21c37500d0d0bb1407abc0ab8b08a7e >[heketi] INFO 2018/06/08 08:57:05 Started async operation: Delete Volume >[negroni] Started GET /queue/e21c37500d0d0bb1407abc0ab8b08a7e >[negroni] Completed 200 OK in 118.332µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 168.052µs >[negroni] Completed 200 OK in 135.926µs >[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_f6df34957c593ea7de0d475a1ccbded6 --virtualsize 2097152K --name brick_f6df34957c593ea7de0d475a1ccbded6 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_f6df34957c593ea7de0d475a1ccbded6" created. 
>[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e9b7f209aa8449a1dc89d03f1f8aa3ed >Result: >[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f6df34957c593ea7de0d475a1ccbded6 >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f6df34957c593ea7de0d475a1ccbded6 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f6df34957c593ea7de0d475a1ccbded6 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f6df34957c593ea7de0d475a1ccbded6 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_3dbab6cd698d01ea7f00dbd81329643a > >Result: Logical volume "brick_3dbab6cd698d01ea7f00dbd81329643a" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:06 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_f6df34957c593ea7de0d475a1ccbded6 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f6df34957c593ea7de0d475a1ccbded6 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_e9b7f209aa8449a1dc89d03f1f8aa3ed --virtualsize 2097152K --name brick_e9b7f209aa8449a1dc89d03f1f8aa3ed >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e9b7f209aa8449a1dc89d03f1f8aa3ed" created. >[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f6df34957c593ea7de0d475a1ccbded6/brick >Result: >[negroni] Started GET /queue/e21c37500d0d0bb1407abc0ab8b08a7e >[negroni] Completed 200 OK in 154.503µs >[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2002 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f6df34957c593ea7de0d475a1ccbded6/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e9b7f209aa8449a1dc89d03f1f8aa3ed >Result: 
meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e9b7f209aa8449a1dc89d03f1f8aa3ed isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f6df34957c593ea7de0d475a1ccbded6/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e9b7f209aa8449a1dc89d03f1f8aa3ed /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e9b7f209aa8449a1dc89d03f1f8aa3ed xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 159.352µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 116.352µs >[kubeexec] DEBUG 2018/06/08 08:57:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_e9b7f209aa8449a1dc89d03f1f8aa3ed /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e9b7f209aa8449a1dc89d03f1f8aa3ed >Result: >[kubeexec] DEBUG 2018/06/08 08:57:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e9b7f209aa8449a1dc89d03f1f8aa3ed/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2002 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e9b7f209aa8449a1dc89d03f1f8aa3ed/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_bf9bfa3d8464d1e5476516d891583f10 > >Result: Logical volume "brick_bf9bfa3d8464d1e5476516d891583f10" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e9b7f209aa8449a1dc89d03f1f8aa3ed/brick >Result: >[negroni] Started GET /queue/e21c37500d0d0bb1407abc0ab8b08a7e >[negroni] Completed 200 OK in 134.271µs >[kubeexec] DEBUG 2018/06/08 08:57:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_14353da20ffb37550480dff24b915066 > >Result: 0 >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 
105.398µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 58.299µs >[kubeexec] DEBUG 2018/06/08 08:57:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] DEBUG 2018/06/08 08:57:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bd498e4984809a277823f95d7ac56f3a >Result: >[negroni] Started GET /queue/e21c37500d0d0bb1407abc0ab8b08a7e >[negroni] Completed 200 OK in 183.018µs >[kubeexec] ERROR 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] DEBUG 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K 
--size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_bd498e4984809a277823f95d7ac56f3a --virtualsize 2097152K --name brick_bd498e4984809a277823f95d7ac56f3a >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_bd498e4984809a277823f95d7ac56f3a" created. >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 97.834µs >[negroni] Completed 200 OK in 198.583µs >[kubeexec] DEBUG 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bd498e4984809a277823f95d7ac56f3a >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bd498e4984809a277823f95d7ac56f3a isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bd498e4984809a277823f95d7ac56f3a /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bd498e4984809a277823f95d7ac56f3a xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid 
/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_bd498e4984809a277823f95d7ac56f3a /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bd498e4984809a277823f95d7ac56f3a >Result: >[kubeexec] ERROR 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:09 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:09 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:09 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume 
vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 08:57:09 asynchttp.go:292: Completed job e21c37500d0d0bb1407abc0ab8b08a7e in 3.670301234s >[kubeexec] DEBUG 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bd498e4984809a277823f95d7ac56f3a/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2002 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bd498e4984809a277823f95d7ac56f3a/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bd498e4984809a277823f95d7ac56f3a/brick >Result: >[cmdexec] INFO 2018/06/08 08:57:09 Creating volume vol_9f9fb17746d0aad637b132875d2744e5 replica 3 >[negroni] Started GET /queue/e21c37500d0d0bb1407abc0ab8b08a7e >[negroni] Completed 500 Internal Server Error in 116.502µs >[kubeexec] DEBUG 2018/06/08 08:57:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2aca658dfb3ac9ba0fbf538dad4caa3b > >Result: 0 >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 141.891µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 116.01µs >[kubeexec] DEBUG 2018/06/08 08:57:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc 
Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_3dbab6cd698d01ea7f00dbd81329643a > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:57:10 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_00c2d7c5fda2de77f930386434235209/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 08:57:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_bf9bfa3d8464d1e5476516d891583f10 > >Result: 0 >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 226.095µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 223.012µs >[kubeexec] DEBUG 2018/06/08 08:57:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_14353da20ffb37550480dff24b915066 > >Result: Logical volume "tp_14353da20ffb37550480dff24b915066" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_9f9fb17746d0aad637b132875d2744e5 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_bd498e4984809a277823f95d7ac56f3a/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_e9b7f209aa8449a1dc89d03f1f8aa3ed/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_f6df34957c593ea7de0d475a1ccbded6/brick >Result: volume create: vol_9f9fb17746d0aad637b132875d2744e5: success: please start the volume to access 
data >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 91.057µs >[negroni] Completed 200 OK in 825.018µs >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 132.608µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 62.554µs >[heketi] INFO 2018/06/08 08:57:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:57:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 111.076µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 76.999µs >[kubeexec] DEBUG 2018/06/08 08:57:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_9f9fb17746d0aad637b132875d2744e5 >Result: volume start: vol_9f9fb17746d0aad637b132875d2744e5: success >[kubeexec] DEBUG 2018/06/08 08:57:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 33min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd 
-p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─20060 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > ├─20061 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > ├─20076 /bin/bash /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_9f9fb17746d0aad637b132875d2744e5 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd > ├─20078 /bin/bash /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_9f9fb17746d0aad637b132875d2744e5 --first=no --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd > ├─20079 /bin/bash /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=vol_9f9fb17746d0aad637b132875d2744e5 --first=no --version=1 --volume-op=start 
--gd-workdir=/var/lib/glusterd > ├─20080 grep user.cifs /var/lib/glusterd/vols/vol_9f9fb17746d0aad637b132875d2744e5/info > └─20081 cut -d= -f2 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 08:57:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 08:57:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[asynchttp] INFO 2018/06/08 08:57:14 asynchttp.go:292: Completed job d6cbe9a27af546f3c96e74f32e86cd27 in 7m21.17200539s >[kubeexec] DEBUG 2018/06/08 08:57:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_2aca658dfb3ac9ba0fbf538dad4caa3b > >Result: Logical volume "tp_2aca658dfb3ac9ba0fbf538dad4caa3b" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_3dbab6cd698d01ea7f00dbd81329643a > >Result: Logical volume "tp_3dbab6cd698d01ea7f00dbd81329643a" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_00c2d7c5fda2de77f930386434235209 > >Result: Logical volume 
"brick_00c2d7c5fda2de77f930386434235209" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_bf9bfa3d8464d1e5476516d891583f10 > >Result: Logical volume "tp_bf9bfa3d8464d1e5476516d891583f10" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_14353da20ffb37550480dff24b915066 >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 105.119µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 92.291µs >[heketi] INFO 2018/06/08 08:57:15 Allocating brick set #0 >[kubeexec] DEBUG 2018/06/08 08:57:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 33min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id 
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─20425 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > └─20426 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 08:57:15 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 08:57:15 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[heketi] INFO 2018/06/08 08:57:15 Creating brick 7649722a655922fad2e75ee7af97c506 >[heketi] INFO 2018/06/08 08:57:15 Creating brick 94745514333fff7f22e8d60eff97d635 >[heketi] INFO 2018/06/08 08:57:15 Creating brick a70d481dd9566d346b4d60131fb23a9c >[kubeexec] DEBUG 2018/06/08 08:57:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_2aca658dfb3ac9ba0fbf538dad4caa3b >Result: >[heketi] INFO 2018/06/08 08:57:15 Allocating brick set #0 >[kubeexec] DEBUG 2018/06/08 08:57:15 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 31min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p 
/var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > ├─18430 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > └─18431 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S 
/var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 08:57:15 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 08:57:15 Cleaned 0 nodes from health cache >[heketi] INFO 2018/06/08 08:57:16 Creating brick d11702f97397089b0224af9ef7a122f1 >[heketi] INFO 2018/06/08 08:57:16 Creating brick b6a2ea41f37c6396a2e85a6a41faa54f >[heketi] INFO 2018/06/08 08:57:16 Creating brick 070c89e7be12165f271c2906ae9e74b7 >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 109.701µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 74.12µs >[kubeexec] DEBUG 2018/06/08 08:57:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a70d481dd9566d346b4d60131fb23a9c >Result: >[kubeexec] DEBUG 2018/06/08 08:57:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_a70d481dd9566d346b4d60131fb23a9c --virtualsize 2097152K --name brick_a70d481dd9566d346b4d60131fb23a9c >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 
TiB of data. > Logical volume "brick_a70d481dd9566d346b4d60131fb23a9c" created. >[kubeexec] DEBUG 2018/06/08 08:57:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_3dbab6cd698d01ea7f00dbd81329643a >Result: >[heketi] INFO 2018/06/08 08:57:16 Allocating brick set #0 >[heketi] INFO 2018/06/08 08:57:16 Creating brick 81fe2c773a661172f7014e85c2f68184 >[heketi] INFO 2018/06/08 08:57:16 Creating brick 9049906389b8fcca23da885bdc1ce8ab >[heketi] INFO 2018/06/08 08:57:16 Creating brick e4aaed99528f9d907b84f92e740f0da4 >[kubeexec] DEBUG 2018/06/08 08:57:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_a70d481dd9566d346b4d60131fb23a9c >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_a70d481dd9566d346b4d60131fb23a9c isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_a70d481dd9566d346b4d60131fb23a9c /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a70d481dd9566d346b4d60131fb23a9c xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:16 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_94745514333fff7f22e8d60eff97d635 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_a70d481dd9566d346b4d60131fb23a9c /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a70d481dd9566d346b4d60131fb23a9c >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 106.154µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 99.818µs >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a70d481dd9566d346b4d60131fb23a9c/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2000 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a70d481dd9566d346b4d60131fb23a9c/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_94745514333fff7f22e8d60eff97d635 --virtualsize 2097152K --name brick_94745514333fff7f22e8d60eff97d635 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. 
> Logical volume "brick_94745514333fff7f22e8d60eff97d635" created. >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a70d481dd9566d346b4d60131fb23a9c/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_00c2d7c5fda2de77f930386434235209 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_94745514333fff7f22e8d60eff97d635 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_94745514333fff7f22e8d60eff97d635 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_94745514333fff7f22e8d60eff97d635 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_94745514333fff7f22e8d60eff97d635 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:17 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_94745514333fff7f22e8d60eff97d635 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_94745514333fff7f22e8d60eff97d635 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_94745514333fff7f22e8d60eff97d635/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_b6a2ea41f37c6396a2e85a6a41faa54f >Result: >[kubeexec] DEBUG 2018/06/08 08:57:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2000 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_94745514333fff7f22e8d60eff97d635/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_94745514333fff7f22e8d60eff97d635/brick >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 200 OK in 160.668µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 129.412µs >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_bf9bfa3d8464d1e5476516d891583f10 >Result: >[heketi] INFO 2018/06/08 08:57:18 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:57:18 asynchttp.go:292: Completed job 30a446e1b3f0ca388030dee9d6823363 in 2m42.319785934s >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_b6a2ea41f37c6396a2e85a6a41faa54f --virtualsize 2097152K --name brick_b6a2ea41f37c6396a2e85a6a41faa54f >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_b6a2ea41f37c6396a2e85a6a41faa54f" created. >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_b6a2ea41f37c6396a2e85a6a41faa54f >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_b6a2ea41f37c6396a2e85a6a41faa54f isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_b6a2ea41f37c6396a2e85a6a41faa54f 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_b6a2ea41f37c6396a2e85a6a41faa54f xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_b6a2ea41f37c6396a2e85a6a41faa54f /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_b6a2ea41f37c6396a2e85a6a41faa54f >Result: >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_070c89e7be12165f271c2906ae9e74b7 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_b6a2ea41f37c6396a2e85a6a41faa54f/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_7649722a655922fad2e75ee7af97c506 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2001 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_b6a2ea41f37c6396a2e85a6a41faa54f/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_b6a2ea41f37c6396a2e85a6a41faa54f/brick >Result: >[negroni] Started GET /queue/30a446e1b3f0ca388030dee9d6823363 >[negroni] Completed 204 No Content in 99.683µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 98.67µs >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_070c89e7be12165f271c2906ae9e74b7 --virtualsize 2097152K --name brick_070c89e7be12165f271c2906ae9e74b7 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_070c89e7be12165f271c2906ae9e74b7" created. >[negroni] Started DELETE /volumes/a4a6e4892da299f6c5634b8a2def697e >[negroni] Completed 404 Not Found in 13.671837ms >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_070c89e7be12165f271c2906ae9e74b7 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_070c89e7be12165f271c2906ae9e74b7 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p 
/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9049906389b8fcca23da885bdc1ce8ab >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_7649722a655922fad2e75ee7af97c506 --virtualsize 2097152K --name brick_7649722a655922fad2e75ee7af97c506 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_7649722a655922fad2e75ee7af97c506" created. >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_070c89e7be12165f271c2906ae9e74b7 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_070c89e7be12165f271c2906ae9e74b7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_7649722a655922fad2e75ee7af97c506 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_7649722a655922fad2e75ee7af97c506 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:19 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_070c89e7be12165f271c2906ae9e74b7 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_070c89e7be12165f271c2906ae9e74b7 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_7649722a655922fad2e75ee7af97c506 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_7649722a655922fad2e75ee7af97c506 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_070c89e7be12165f271c2906ae9e74b7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_9049906389b8fcca23da885bdc1ce8ab --virtualsize 2097152K --name brick_9049906389b8fcca23da885bdc1ce8ab >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_9049906389b8fcca23da885bdc1ce8ab" created. 
>[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2001 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_070c89e7be12165f271c2906ae9e74b7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_7649722a655922fad2e75ee7af97c506 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_7649722a655922fad2e75ee7af97c506 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9049906389b8fcca23da885bdc1ce8ab >Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9049906389b8fcca23da885bdc1ce8ab isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_7649722a655922fad2e75ee7af97c506/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_070c89e7be12165f271c2906ae9e74b7/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9049906389b8fcca23da885bdc1ce8ab /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9049906389b8fcca23da885bdc1ce8ab xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2000 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_7649722a655922fad2e75ee7af97c506/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_81fe2c773a661172f7014e85c2f68184 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_7649722a655922fad2e75ee7af97c506/brick >Result: >[cmdexec] INFO 2018/06/08 08:57:20 Creating volume vol_ad1e5849e9566f1bcaa09cfb9c0b96ef replica 3 >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_9049906389b8fcca23da885bdc1ce8ab /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9049906389b8fcca23da885bdc1ce8ab >Result: 
>[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 111.168µs >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d11702f97397089b0224af9ef7a122f1 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9049906389b8fcca23da885bdc1ce8ab/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chown :2003 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9049906389b8fcca23da885bdc1ce8ab/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_81fe2c773a661172f7014e85c2f68184 --virtualsize 2097152K --name brick_81fe2c773a661172f7014e85c2f68184 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_81fe2c773a661172f7014e85c2f68184" created. 
>[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: chmod 2775 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9049906389b8fcca23da885bdc1ce8ab/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_81fe2c773a661172f7014e85c2f68184 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_81fe2c773a661172f7014e85c2f68184 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_3a4297677881963e3f80124971d50eea/tp_d11702f97397089b0224af9ef7a122f1 --virtualsize 2097152K --name brick_d11702f97397089b0224af9ef7a122f1 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_d11702f97397089b0224af9ef7a122f1" created. 
>[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_81fe2c773a661172f7014e85c2f68184 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_81fe2c773a661172f7014e85c2f68184 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d11702f97397089b0224af9ef7a122f1 >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d11702f97397089b0224af9ef7a122f1 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_81fe2c773a661172f7014e85c2f68184 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_81fe2c773a661172f7014e85c2f68184 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d11702f97397089b0224af9ef7a122f1 
/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d11702f97397089b0224af9ef7a122f1 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_81fe2c773a661172f7014e85c2f68184/brick >Result: >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 11.580995ms >[asynchttp] INFO 2018/06/08 08:57:20 asynchttp.go:288: Started job b6d4e666dcbba47c0877c64a0f0cfb8f >[heketi] INFO 2018/06/08 08:57:20 Started async operation: Delete Volume >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 117.014µs >[negroni] Completed 202 Accepted in 26.777963ms >[asynchttp] INFO 2018/06/08 08:57:20 asynchttp.go:288: Started job dc23356206ee34a3d146cb2645e3a104 >[heketi] INFO 2018/06/08 08:57:20 Started async operation: Delete Volume >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 92.719µs >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chown :2003 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_81fe2c773a661172f7014e85c2f68184/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_d11702f97397089b0224af9ef7a122f1 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d11702f97397089b0224af9ef7a122f1 >Result: >[kubeexec] DEBUG 
2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: chmod 2775 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_81fe2c773a661172f7014e85c2f68184/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d11702f97397089b0224af9ef7a122f1/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2001 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d11702f97397089b0224af9ef7a122f1/brick >Result: >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 171.094µs >[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d11702f97397089b0224af9ef7a122f1/brick >Result: >[cmdexec] INFO 2018/06/08 08:57:21 Creating volume vol_15e0122e942fc41f80666a3714670682 replica 3 >[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e4aaed99528f9d907b84f92e740f0da4 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_ad1e5849e9566f1bcaa09cfb9c0b96ef replica 3 
10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_94745514333fff7f22e8d60eff97d635/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_7649722a655922fad2e75ee7af97c506/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_a70d481dd9566d346b4d60131fb23a9c/brick >Result: volume create: vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: success: please start the volume to access data >[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 12288K --chunksize 256K --size 2097152K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_e4aaed99528f9d907b84f92e740f0da4 --virtualsize 2097152K --name brick_e4aaed99528f9d907b84f92e740f0da4 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_e4aaed99528f9d907b84f92e740f0da4" created. 
>[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_e4aaed99528f9d907b84f92e740f0da4 >Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_e4aaed99528f9d907b84f92e740f0da4 isize=512 agcount=8, agsize=65536 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=524288, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_e4aaed99528f9d907b84f92e740f0da4 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e4aaed99528f9d907b84f92e740f0da4 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 106.764µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 102.763µs >[kubeexec] DEBUG 2018/06/08 08:57:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_e4aaed99528f9d907b84f92e740f0da4 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e4aaed99528f9d907b84f92e740f0da4 >Result: >[kubeexec] DEBUG 2018/06/08 08:57:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e4aaed99528f9d907b84f92e740f0da4/brick >Result: >[kubeexec] DEBUG 2018/06/08 08:57:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chown :2003 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e4aaed99528f9d907b84f92e740f0da4/brick >Result: >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 141.979µs >[kubeexec] DEBUG 2018/06/08 08:57:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: chmod 2775 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e4aaed99528f9d907b84f92e740f0da4/brick >Result: >[cmdexec] INFO 2018/06/08 08:57:22 Creating volume vol_b99532640a5201d243193159ee762ae4 replica 3 >[kubeexec] DEBUG 2018/06/08 08:57:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_00c2d7c5fda2de77f930386434235209 > >Result: Logical volume "tp_00c2d7c5fda2de77f930386434235209" successfully removed >[kubeexec] DEBUG 2018/06/08 08:57:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 109.234µs >[negroni] Started GET 
/queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 79.367µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 110.082µs >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 118.725µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 84.982µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 115.562µs >[kubeexec] DEBUG 2018/06/08 08:57:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_ad1e5849e9566f1bcaa09cfb9c0b96ef >Result: volume start: vol_ad1e5849e9566f1bcaa09cfb9c0b96ef: success >[kubeexec] DEBUG 2018/06/08 08:57:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_15e0122e942fc41f80666a3714670682 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_d11702f97397089b0224af9ef7a122f1/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_070c89e7be12165f271c2906ae9e74b7/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_b6a2ea41f37c6396a2e85a6a41faa54f/brick >Result: volume create: vol_15e0122e942fc41f80666a3714670682: success: please start the volume to access data >[asynchttp] INFO 2018/06/08 08:57:24 asynchttp.go:292: Completed job a8d46afb5176808836420be86f6fdcd0 in 9m18.364970256s >[kubeexec] DEBUG 2018/06/08 08:57:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> 
><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 130.395µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 93.518µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 137.875µs >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 110.133µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 135.354µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 116.697µs >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 110.827µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 112.202µs >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 200 OK in 113.34µs >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 117.379µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 109.698µs >[kubeexec] DEBUG 2018/06/08 08:57:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_15e0122e942fc41f80666a3714670682 >Result: volume start: vol_15e0122e942fc41f80666a3714670682: success >[kubeexec] DEBUG 2018/06/08 08:57:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_b99532640a5201d243193159ee762ae4 replica 3 
10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_81fe2c773a661172f7014e85c2f68184/brick 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_e4aaed99528f9d907b84f92e740f0da4/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_9049906389b8fcca23da885bdc1ce8ab/brick >Result: volume create: vol_b99532640a5201d243193159ee762ae4: success: please start the volume to access data >[asynchttp] INFO 2018/06/08 08:57:27 asynchttp.go:292: Completed job 6158d16fba595a299f350a673e859df3 in 8m36.574982417s >[kubeexec] DEBUG 2018/06/08 08:57:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_00c2d7c5fda2de77f930386434235209 >Result: >[heketi] INFO 2018/06/08 08:57:28 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 08:57:28 asynchttp.go:292: Completed job cc7d3ffb84c36288d9c4c22cf55e3ada in 2m52.230056309s >[negroni] Started GET /queue/cc7d3ffb84c36288d9c4c22cf55e3ada >[negroni] Completed 204 No Content in 121.384µs >[negroni] Started DELETE /volumes/966a92ed3e4374a5e634f7b133c49e52 >[negroni] Completed 404 Not Found in 2.577544ms >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 120.92µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 95.77µs >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 167.322µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 103.208µs >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK in 172.114µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 123.26µs >[negroni] Started GET /queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 200 OK 
in 133.514µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 200 OK in 100.826µs >[kubeexec] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] DEBUG 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_b99532640a5201d243193159ee762ae4 >Result: volume start: vol_b99532640a5201d243193159ee762ae4: success >[asynchttp] INFO 2018/06/08 08:57:32 asynchttp.go:292: Completed job 078bf2d1330026527a20bcfa4ebc6028 in 7m39.145694872s >[kubeexec] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: 
Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 08:57:32 asynchttp.go:292: Completed job dc23356206ee34a3d146cb2645e3a104 
in 11.681906846s >[kubeexec] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:32 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 08:57:32 asynchttp.go:292: Completed job b6d4e666dcbba47c0877c64a0f0cfb8f in 11.926922311s >[negroni] Started GET 
/queue/b6d4e666dcbba47c0877c64a0f0cfb8f >[negroni] Completed 500 Internal Server Error in 176.888µs >[negroni] Started GET /queue/dc23356206ee34a3d146cb2645e3a104 >[negroni] Completed 500 Internal Server Error in 166.683µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 22.79139ms >[asynchttp] INFO 2018/06/08 08:57:50 asynchttp.go:288: Started job 2d7a1ce31edf99d77108eb3b8bcde568 >[heketi] INFO 2018/06/08 08:57:50 Started async operation: Delete Volume >[negroni] Started GET /queue/2d7a1ce31edf99d77108eb3b8bcde568 >[negroni] Completed 200 OK in 116.265µs >[kubeexec] DEBUG 2018/06/08 08:57:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 08:57:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:57:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 08:57:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script 
volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:57:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:57:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 08:57:51 asynchttp.go:292: Completed job 2d7a1ce31edf99d77108eb3b8bcde568 in 757.020328ms >[negroni] Started GET /queue/2d7a1ce31edf99d77108eb3b8bcde568 >[negroni] Completed 500 Internal Server Error in 174.482µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d 
>[negroni] Completed 202 Accepted in 22.106422ms >[asynchttp] INFO 2018/06/08 08:58:35 asynchttp.go:288: Started job b2e39025ac3a11ffff7a7a616f379150 >[heketi] INFO 2018/06/08 08:58:35 Started async operation: Delete Volume >[negroni] Started GET /queue/b2e39025ac3a11ffff7a7a616f379150 >[negroni] Completed 200 OK in 149.541µs >[kubeexec] DEBUG 2018/06/08 08:58:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 08:58:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 08:58:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 08:58:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 
08:58:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:58:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 08:58:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 08:58:36 asynchttp.go:292: Completed job b2e39025ac3a11ffff7a7a616f379150 in 813.559265ms >[heketi] ERROR 2018/06/08 08:58:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/b2e39025ac3a11ffff7a7a616f379150 >[negroni] Completed 500 Internal Server Error in 196.566µs >[heketi] INFO 2018/06/08 08:59:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 08:59:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 08:59:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: 
systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 35min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. 
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 08:59:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true
>[cmdexec] INFO 2018/06/08 08:59:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 08:59:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 35min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l
/var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0
>
>Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 08:59:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true
>[cmdexec] INFO 2018/06/08 08:59:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 08:59:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 33min ago
> Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 830 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service
> ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id
heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153
> └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
>
>Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting
GlusterFS, a clustered file-system server...
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103)
>[heketi] INFO 2018/06/08 08:59:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 08:59:14 Cleaned 0 nodes from health cache
>[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 202 Accepted in 20.121959ms
>[asynchttp] INFO 2018/06/08 08:59:35 asynchttp.go:288: Started job f5a7380365b1a7430a7906fa4bcbab8c
>[heketi] INFO 2018/06/08 08:59:35 Started async operation: Delete Volume
>[negroni] Started GET /queue/f5a7380365b1a7430a7906fa4bcbab8c
>[negroni] Completed 200 OK in 211.034µs
>[kubeexec] DEBUG 2018/06/08 08:59:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 08:59:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>]
>[cmdexec] ERROR 2018/06/08 08:59:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume
vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[kubeexec] ERROR 2018/06/08 08:59:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>]
>[cmdexec] ERROR 2018/06/08 08:59:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 08:59:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 08:59:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 08:59:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[asynchttp] INFO 2018/06/08 08:59:36 asynchttp.go:292: Completed job f5a7380365b1a7430a7906fa4bcbab8c in 771.888072ms
>[negroni] Started GET /queue/f5a7380365b1a7430a7906fa4bcbab8c
>[negroni] Completed 500 Internal Server Error in 184.089µs
>[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 202 Accepted in 21.372586ms
>[asynchttp] INFO 2018/06/08 08:59:50 asynchttp.go:288: Started job eb71a9488b90308975bc0f5e5c20c850
>[heketi] INFO 2018/06/08 08:59:50 Started async operation: Delete Volume
>[negroni] Started GET /queue/eb71a9488b90308975bc0f5e5c20c850
>[negroni] Completed 200 OK in 106.679µs
>[kubeexec] DEBUG 2018/06/08 08:59:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 08:59:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 08:59:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[kubeexec] ERROR 2018/06/08 08:59:51
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 08:59:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 08:59:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 08:59:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 08:59:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[asynchttp] INFO 2018/06/08 08:59:51 asynchttp.go:292: Completed job eb71a9488b90308975bc0f5e5c20c850 in 747.007149ms
>[negroni] Started GET /queue/eb71a9488b90308975bc0f5e5c20c850
>[negroni]
Completed 500 Internal Server Error in 175.836µs
>[heketi] INFO 2018/06/08 09:01:14 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 09:01:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 09:01:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 37min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─20535 /usr/sbin/glusterfs -s localhost
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034
>
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 09:01:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true
>[cmdexec] INFO 2018/06/08 09:01:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 09:01:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 37min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0
>
>Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 09:01:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true
>[cmdexec] INFO 2018/06/08 09:01:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 09:01:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 35min ago
> Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 830 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service
> ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76
--volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153
> └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
>
>Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103)
>[heketi] INFO 2018/06/08 09:01:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 09:01:14 Cleaned 0 nodes from health cache
>[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 202 Accepted in 18.358197ms
>[asynchttp] INFO 2018/06/08 09:01:50 asynchttp.go:288: Started job e830ef7460198b6e03c5a71a4192a538
>[heketi] INFO 2018/06/08 09:01:50 Started async operation: Delete Volume
>[negroni] Started GET /queue/e830ef7460198b6e03c5a71a4192a538
>[negroni] Completed 200 OK in 145.858µs
>[kubeexec] DEBUG 2018/06/08 09:01:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 09:01:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:01:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[kubeexec] ERROR 2018/06/08 09:01:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:01:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 09:01:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 09:01:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[asynchttp] INFO 2018/06/08 09:01:51 asynchttp.go:292: Completed job e830ef7460198b6e03c5a71a4192a538 in 820.702643ms
>[heketi] ERROR 2018/06/08 09:01:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does
not exist
>[negroni] Started GET /queue/e830ef7460198b6e03c5a71a4192a538
>[negroni] Completed 500 Internal Server Error in 229.705µs
>[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 202 Accepted in 20.087033ms
>[asynchttp] INFO 2018/06/08 09:02:05 asynchttp.go:288: Started job 2e462a58509319083063068ceaf27cb4
>[heketi] INFO 2018/06/08 09:02:05 Started async operation: Delete Volume
>[negroni] Started GET /queue/2e462a58509319083063068ceaf27cb4
>[negroni] Completed 200 OK in 104.781µs
>[kubeexec] DEBUG 2018/06/08 09:02:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 09:02:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:02:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[kubeexec] ERROR 2018/06/08 09:02:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command
terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:02:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:02:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:02:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:02:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[asynchttp] INFO 2018/06/08 09:02:06 asynchttp.go:292: Completed job 2e462a58509319083063068ceaf27cb4 in 750.031888ms
>[negroni] Started GET /queue/2e462a58509319083063068ceaf27cb4
>[negroni] Completed 500 Internal Server Error in 144.254µs
>[heketi] INFO 2018/06/08 09:03:14 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 09:03:14 Check Glusterd service status in node
dhcp46-187.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 09:03:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 39min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:03:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:03:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:03:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 39min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:03:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:03:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:03:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 37min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > ââ18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:03:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:03:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 18.750892ms >[asynchttp] INFO 2018/06/08 09:04:05 asynchttp.go:288: Started job c5ee7361d0c60e65cd4eda00b14a6cd0 >[heketi] INFO 2018/06/08 09:04:05 Started async operation: Delete Volume >[negroni] Started GET /queue/c5ee7361d0c60e65cd4eda00b14a6cd0 >[negroni] Completed 200 OK in 138.28µs >[kubeexec] DEBUG 2018/06/08 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:04:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:04:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:04:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:04:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:04:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:04:06 asynchttp.go:292: Completed job c5ee7361d0c60e65cd4eda00b14a6cd0 in 806.347502ms >[heketi] ERROR 2018/06/08 09:04:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[negroni] Started GET /queue/c5ee7361d0c60e65cd4eda00b14a6cd0 >[negroni] Completed 500 Internal Server Error in 152.503µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 26.075111ms >[asynchttp] INFO 2018/06/08 09:04:20 asynchttp.go:288: Started job 659f5487807f00c05d2f47cb94dc9ce0 >[heketi] INFO 2018/06/08 09:04:20 Started async operation: Delete Volume >[negroni] Started GET /queue/659f5487807f00c05d2f47cb94dc9ce0 >[negroni] Completed 200 OK in 270.265µs >[kubeexec] DEBUG 2018/06/08 09:04:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:04:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:04:21 
/src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:04:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:04:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:04:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:04:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:04:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on 
glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:04:21 asynchttp.go:292: Completed job 659f5487807f00c05d2f47cb94dc9ce0 in 816.414975ms >[negroni] Started GET /queue/659f5487807f00c05d2f47cb94dc9ce0 >[negroni] Completed 500 Internal Server Error in 242.004µs >[heketi] INFO 2018/06/08 09:05:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:05:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:05:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 41min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:05:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:05:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:05:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 41min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:05:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:05:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:05:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 39min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:05:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:05:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 37.727617ms >[asynchttp] INFO 2018/06/08 09:06:20 asynchttp.go:288: Started job 1bb8013f2d0b808f8d23139575c059bc >[heketi] INFO 2018/06/08 09:06:20 Started async operation: Delete Volume >[negroni] Started GET /queue/1bb8013f2d0b808f8d23139575c059bc >[negroni] Completed 200 OK in 277.362µs >[kubeexec] DEBUG 2018/06/08 09:06:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:06:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:06:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:06:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:06:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:06:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:06:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:06:21 asynchttp.go:292: Completed job 1bb8013f2d0b808f8d23139575c059bc in 797.286483ms >[heketi] ERROR 2018/06/08 09:06:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does 
not exist >[negroni] Started GET /queue/1bb8013f2d0b808f8d23139575c059bc >[negroni] Completed 500 Internal Server Error in 201.559µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 24.2101ms >[asynchttp] INFO 2018/06/08 09:06:35 asynchttp.go:288: Started job b171bff8ed623ff2ea4a417bad8f11b4 >[heketi] INFO 2018/06/08 09:06:35 Started async operation: Delete Volume >[negroni] Started GET /queue/b171bff8ed623ff2ea4a417bad8f11b4 >[negroni] Completed 200 OK in 137.465µs >[kubeexec] DEBUG 2018/06/08 09:06:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:06:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:06:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:06:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command 
terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:06:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:06:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:06:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:06:36 asynchttp.go:292: Completed job b171bff8ed623ff2ea4a417bad8f11b4 in 808.326604ms >[heketi] ERROR 2018/06/08 09:06:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/b171bff8ed623ff2ea4a417bad8f11b4 >[negroni] Completed 500 Internal Server Error in 257.561µs >[heketi] INFO 2018/06/08 09:07:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:07:14 Check Glusterd service status in node 
dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:07:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 43min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:07:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:07:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:07:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 43min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:07:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:07:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:07:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 41min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:07:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:07:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 28.154398ms >[asynchttp] INFO 2018/06/08 09:08:35 asynchttp.go:288: Started job b3820f5403380b01d609e3b2a370a792 >[heketi] INFO 2018/06/08 09:08:35 Started async operation: Delete Volume >[negroni] Started GET /queue/b3820f5403380b01d609e3b2a370a792 >[negroni] Completed 200 OK in 200.353µs >[kubeexec] DEBUG 2018/06/08 09:08:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:08:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:08:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:08:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:08:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:08:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:08:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:08:36 asynchttp.go:292: Completed job b3820f5403380b01d609e3b2a370a792 in 812.932688ms >[heketi] ERROR 2018/06/08 09:08:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[negroni] Started GET /queue/b3820f5403380b01d609e3b2a370a792 >[negroni] Completed 500 Internal Server Error in 187.886µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 35.194116ms >[asynchttp] INFO 2018/06/08 09:08:50 asynchttp.go:288: Started job a1bdf9a292b43ef09b1f60ec2ad7f6fa >[heketi] INFO 2018/06/08 09:08:50 Started async operation: Delete Volume >[negroni] Started GET /queue/a1bdf9a292b43ef09b1f60ec2ad7f6fa >[negroni] Completed 200 OK in 194.726µs >[kubeexec] DEBUG 2018/06/08 09:08:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:08:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:08:51 
/src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:08:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:08:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:08:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:08:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:08:51 asynchttp.go:292: Completed job a1bdf9a292b43ef09b1f60ec2ad7f6fa in 767.13158ms >[heketi] ERROR 2018/06/08 09:08:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete 
Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/a1bdf9a292b43ef09b1f60ec2ad7f6fa >[negroni] Completed 500 Internal Server Error in 133.319µs >[heketi] INFO 2018/06/08 09:09:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:09:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:09:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 45min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l 
/var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:09:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:09:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:09:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 45min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:09:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:09:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:09:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 43min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:09:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:09:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 19.103177ms >[asynchttp] INFO 2018/06/08 09:10:50 asynchttp.go:288: Started job bc5e310459e34fcba258116adf39ff85 >[heketi] INFO 2018/06/08 09:10:50 Started async operation: Delete Volume >[negroni] Started GET /queue/bc5e310459e34fcba258116adf39ff85 >[negroni] Completed 200 OK in 160.312µs >[kubeexec] DEBUG 2018/06/08 09:10:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:10:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:10:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:10:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:10:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:10:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:10:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:10:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:10:51 asynchttp.go:292: Completed job bc5e310459e34fcba258116adf39ff85 in 
780.127742ms >[negroni] Started GET /queue/bc5e310459e34fcba258116adf39ff85 >[negroni] Completed 500 Internal Server Error in 127.695µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 29.447604ms >[asynchttp] INFO 2018/06/08 09:11:05 asynchttp.go:288: Started job 8f9a059d334e59fddb797480ae337e71 >[heketi] INFO 2018/06/08 09:11:05 Started async operation: Delete Volume >[negroni] Started GET /queue/8f9a059d334e59fddb797480ae337e71 >[negroni] Completed 200 OK in 181.902µs >[kubeexec] DEBUG 2018/06/08 09:11:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:11:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:11:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:11:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: 
Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:11:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:11:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:11:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:11:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:11:06 asynchttp.go:292: Completed job 8f9a059d334e59fddb797480ae337e71 in 794.580448ms >[negroni] Started GET /queue/8f9a059d334e59fddb797480ae337e71 >[negroni] Completed 500 Internal Server Error in 196.176µs >[heketi] INFO 2018/06/08 09:11:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:11:14 Check Glusterd service status in node 
dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:11:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 47min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:11:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:11:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:11:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 47min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:11:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:11:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:11:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 45min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:11:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:11:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 21.512522ms >[asynchttp] INFO 2018/06/08 09:13:05 asynchttp.go:288: Started job dfa50bdad7eded8f0001e1ea503a966d >[heketi] INFO 2018/06/08 09:13:05 Started async operation: Delete Volume >[negroni] Started GET /queue/dfa50bdad7eded8f0001e1ea503a966d >[negroni] Completed 200 OK in 198.658µs >[kubeexec] DEBUG 2018/06/08 09:13:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:13:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:13:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:13:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:13:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:13:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:13:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[negroni] Started GET /queue/dfa50bdad7eded8f0001e1ea503a966d >[negroni] Completed 200 OK in 160.212µs >[asynchttp] INFO 2018/06/08 09:13:06 asynchttp.go:292: Completed job dfa50bdad7eded8f0001e1ea503a966d in 1.024468828s >[heketi] ERROR 2018/06/08 09:13:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[negroni] Started GET /queue/dfa50bdad7eded8f0001e1ea503a966d >[negroni] Completed 500 Internal Server Error in 143.517µs >[heketi] INFO 2018/06/08 09:13:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:13:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:13:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 49min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id 
heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:13:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:13:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:13:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 49min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:13:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:13:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:13:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 47min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:13:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:13:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 22.094084ms >[asynchttp] INFO 2018/06/08 09:13:20 asynchttp.go:288: Started job e89bb0760ac4b6a89bb60546cdb7d609 >[heketi] INFO 2018/06/08 09:13:20 Started async operation: Delete Volume >[negroni] Started GET /queue/e89bb0760ac4b6a89bb60546cdb7d609 >[negroni] Completed 200 OK in 125.19µs >[kubeexec] DEBUG 2018/06/08 09:13:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:13:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:13:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume 
vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:13:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:13:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:13:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:13:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:13:21 asynchttp.go:292: Completed job e89bb0760ac4b6a89bb60546cdb7d609 in 748.931699ms >[heketi] ERROR 2018/06/08 09:13:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does 
not exist >[negroni] Started GET /queue/e89bb0760ac4b6a89bb60546cdb7d609 >[negroni] Completed 500 Internal Server Error in 207.555µs >[heketi] INFO 2018/06/08 09:15:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:15:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:15:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 51min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option 
heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:15:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:15:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:15:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 51min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id 
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:15:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:15:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:15:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 49min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 
--volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:15:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:15:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 29.246643ms >[asynchttp] INFO 2018/06/08 09:15:20 asynchttp.go:288: Started job 760396b8378841f7322d46bc920f5d4c >[heketi] INFO 2018/06/08 09:15:20 Started async operation: Delete Volume >[negroni] Started GET /queue/760396b8378841f7322d46bc920f5d4c >[negroni] Completed 200 OK in 216.878µs >[kubeexec] DEBUG 2018/06/08 09:15:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:15:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:15:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:15:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:15:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:15:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:15:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:15:21 asynchttp.go:292: Completed job 760396b8378841f7322d46bc920f5d4c in 778.675918ms >[heketi] ERROR 2018/06/08 09:15:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does 
not exist >[negroni] Started GET /queue/760396b8378841f7322d46bc920f5d4c >[negroni] Completed 500 Internal Server Error in 199.251µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 42.510563ms >[asynchttp] INFO 2018/06/08 09:15:35 asynchttp.go:288: Started job b3d15906db427e0a5b0864aa90ed15a1 >[heketi] INFO 2018/06/08 09:15:35 Started async operation: Delete Volume >[negroni] Started GET /queue/b3d15906db427e0a5b0864aa90ed15a1 >[negroni] Completed 200 OK in 193.702µs >[kubeexec] DEBUG 2018/06/08 09:15:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:15:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:15:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:15:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command 
terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:15:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:15:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:15:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:15:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:15:36 asynchttp.go:292: Completed job b3d15906db427e0a5b0864aa90ed15a1 in 745.212334ms >[negroni] Started GET /queue/b3d15906db427e0a5b0864aa90ed15a1 >[negroni] Completed 500 Internal Server Error in 318.424µs >[heketi] INFO 2018/06/08 09:17:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:17:14 Check Glusterd service status in node 
dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:17:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 53min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:17:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:17:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:17:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 53min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:17:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:17:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:17:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 51min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:17:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:17:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 25.289857ms >[asynchttp] INFO 2018/06/08 09:17:35 asynchttp.go:288: Started job 5eaabce77f4632aab4be6741b7f05da1 >[heketi] INFO 2018/06/08 09:17:35 Started async operation: Delete Volume >[negroni] Started GET /queue/5eaabce77f4632aab4be6741b7f05da1 >[negroni] Completed 200 OK in 116.773µs >[kubeexec] DEBUG 2018/06/08 09:17:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:17:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:17:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:17:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:17:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:17:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:17:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:17:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:17:36 asynchttp.go:292: Completed job 5eaabce77f4632aab4be6741b7f05da1 in 752.531389ms >[negroni] Started GET /queue/5eaabce77f4632aab4be6741b7f05da1 >[negroni] Completed 500 Internal Server Error in 241.619µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 22.642307ms >[asynchttp] INFO 2018/06/08 09:17:50 asynchttp.go:288: Started job accc37f885102c8e16bf64fc130baa05 >[heketi] INFO 2018/06/08 09:17:50 Started async operation: Delete Volume >[negroni] Started GET /queue/accc37f885102c8e16bf64fc130baa05 >[negroni] Completed 200 OK in 159.331µs >[kubeexec] DEBUG 2018/06/08 09:17:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:17:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:17:51 
/src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:17:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:17:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:17:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:17:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:17:51 asynchttp.go:292: Completed job accc37f885102c8e16bf64fc130baa05 in 801.482539ms >[heketi] ERROR 2018/06/08 09:17:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: 
Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/accc37f885102c8e16bf64fc130baa05 >[negroni] Completed 500 Internal Server Error in 129.086µs >[heketi] INFO 2018/06/08 09:19:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:19:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:19:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 55min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 09:19:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:19:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:19:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 55min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:19:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:19:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:19:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 53min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:19:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:19:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 37.664927ms >[asynchttp] INFO 2018/06/08 09:19:50 asynchttp.go:288: Started job a87500ed17aaa8058a96f89376065a33 >[heketi] INFO 2018/06/08 09:19:50 Started async operation: Delete Volume >[negroni] Started GET /queue/a87500ed17aaa8058a96f89376065a33 >[negroni] Completed 200 OK in 140.491µs >[kubeexec] DEBUG 2018/06/08 09:19:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:19:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:19:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:19:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:19:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:19:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:19:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:19:51 asynchttp.go:292: Completed job a87500ed17aaa8058a96f89376065a33 in 763.735341ms >[heketi] ERROR 2018/06/08 09:19:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does 
not exist >[negroni] Started GET /queue/a87500ed17aaa8058a96f89376065a33 >[negroni] Completed 500 Internal Server Error in 136.414µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 36.092438ms >[asynchttp] INFO 2018/06/08 09:20:05 asynchttp.go:288: Started job 64761d056f81ad1c415cd0ad05fd737f >[heketi] INFO 2018/06/08 09:20:05 Started async operation: Delete Volume >[negroni] Started GET /queue/64761d056f81ad1c415cd0ad05fd737f >[negroni] Completed 200 OK in 108.592µs >[kubeexec] DEBUG 2018/06/08 09:20:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:20:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:20:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:20:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command 
terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:20:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:20:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:20:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:20:06 asynchttp.go:292: Completed job 64761d056f81ad1c415cd0ad05fd737f in 734.563436ms >[heketi] ERROR 2018/06/08 09:20:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/64761d056f81ad1c415cd0ad05fd737f >[negroni] Completed 500 Internal Server Error in 194.271µs >[heketi] INFO 2018/06/08 09:21:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:21:14 Check Glusterd service status in node 
dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:21:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 57min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:21:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:21:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:21:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 57min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:21:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:21:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:21:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 55min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:21:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:21:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 17.042807ms >[asynchttp] INFO 2018/06/08 09:22:05 asynchttp.go:288: Started job 19c38fef7ce5cca7827d4664e3c834d4 >[heketi] INFO 2018/06/08 09:22:05 Started async operation: Delete Volume >[negroni] Started GET /queue/19c38fef7ce5cca7827d4664e3c834d4 >[negroni] Completed 200 OK in 187.572µs >[kubeexec] DEBUG 2018/06/08 09:22:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:22:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:22:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:22:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:22:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:22:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:22:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:22:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:22:06 asynchttp.go:292: Completed job 19c38fef7ce5cca7827d4664e3c834d4 in 791.373785ms >[negroni] Started GET /queue/19c38fef7ce5cca7827d4664e3c834d4 >[negroni] Completed 500 Internal Server Error in 312.924µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 33.337412ms >[asynchttp] INFO 2018/06/08 09:22:20 asynchttp.go:288: Started job 01c49a4f8b40bb2ce11f2efce29cff1c >[heketi] INFO 2018/06/08 09:22:20 Started async operation: Delete Volume >[negroni] Started GET /queue/01c49a4f8b40bb2ce11f2efce29cff1c >[negroni] Completed 200 OK in 178.872µs >[kubeexec] DEBUG 2018/06/08 09:22:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:22:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:22:21 
/src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:22:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:22:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:22:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:22:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:22:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on 
glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:22:21 asynchttp.go:292: Completed job 01c49a4f8b40bb2ce11f2efce29cff1c in 762.286487ms >[negroni] Started GET /queue/01c49a4f8b40bb2ce11f2efce29cff1c >[negroni] Completed 500 Internal Server Error in 200.842µs >[heketi] INFO 2018/06/08 09:23:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:23:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:23:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 2h 59min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:23:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:23:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:23:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 2h 59min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:23:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:23:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:23:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 57min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:23:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:23:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 21.866363ms >[asynchttp] INFO 2018/06/08 09:24:20 asynchttp.go:288: Started job f41582fe34d6cdebdee5f904c64d6207 >[heketi] INFO 2018/06/08 09:24:20 Started async operation: Delete Volume >[negroni] Started GET /queue/f41582fe34d6cdebdee5f904c64d6207 >[negroni] Completed 200 OK in 121.01µs >[kubeexec] DEBUG 2018/06/08 09:24:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:24:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:24:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:24:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:24:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:24:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:24:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:24:21 asynchttp.go:292: Completed job f41582fe34d6cdebdee5f904c64d6207 in 806.748405ms >[heketi] ERROR 2018/06/08 09:24:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does 
not exist >[negroni] Started GET /queue/f41582fe34d6cdebdee5f904c64d6207 >[negroni] Completed 500 Internal Server Error in 255.449µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 27.973522ms >[asynchttp] INFO 2018/06/08 09:24:35 asynchttp.go:288: Started job a4cd993cad9645319aff2135f818a8fc >[heketi] INFO 2018/06/08 09:24:35 Started async operation: Delete Volume >[negroni] Started GET /queue/a4cd993cad9645319aff2135f818a8fc >[negroni] Completed 200 OK in 212.125µs >[kubeexec] DEBUG 2018/06/08 09:24:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:24:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:24:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:24:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command 
terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:24:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:24:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:24:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:24:36 asynchttp.go:292: Completed job a4cd993cad9645319aff2135f818a8fc in 775.990664ms >[heketi] ERROR 2018/06/08 09:24:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/a4cd993cad9645319aff2135f818a8fc >[negroni] Completed 500 Internal Server Error in 195.298µs >[heketi] INFO 2018/06/08 09:25:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:25:14 Check Glusterd service status in node 
dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:25:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 1min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:25:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:25:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:25:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 1min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:25:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:25:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:25:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 1h 59min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:25:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:25:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 30.343578ms >[asynchttp] INFO 2018/06/08 09:26:35 asynchttp.go:288: Started job 03af57740675b5e7d8d6372fdbff500e >[heketi] INFO 2018/06/08 09:26:35 Started async operation: Delete Volume >[negroni] Started GET /queue/03af57740675b5e7d8d6372fdbff500e >[negroni] Completed 200 OK in 181.205µs >[kubeexec] DEBUG 2018/06/08 09:26:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:26:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:26:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:26:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:26:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:26:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:26:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:26:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:26:36 asynchttp.go:292: Completed job 03af57740675b5e7d8d6372fdbff500e in 790.272346ms >[negroni] Started GET /queue/03af57740675b5e7d8d6372fdbff500e >[negroni] Completed 500 Internal Server Error in 190.478µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 21.589174ms >[asynchttp] INFO 2018/06/08 09:26:50 asynchttp.go:288: Started job cc678afc81636b0b6c9d6d0381163441 >[heketi] INFO 2018/06/08 09:26:50 Started async operation: Delete Volume >[negroni] Started GET /queue/cc678afc81636b0b6c9d6d0381163441 >[negroni] Completed 200 OK in 185.029µs >[kubeexec] DEBUG 2018/06/08 09:26:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:26:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:26:51 
/src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:26:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:26:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:26:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:26:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:26:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on 
glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:26:51 asynchttp.go:292: Completed job cc678afc81636b0b6c9d6d0381163441 in 756.897861ms >[negroni] Started GET /queue/cc678afc81636b0b6c9d6d0381163441 >[negroni] Completed 500 Internal Server Error in 291.308µs >[heketi] INFO 2018/06/08 09:27:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:27:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:27:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 3min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:27:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:27:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:27:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 3min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:27:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:27:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:27:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 1min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:27:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:27:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 17.496768ms >[asynchttp] INFO 2018/06/08 09:28:50 asynchttp.go:288: Started job 8e9d7af5d01821c5c89264a81e1b5e2c >[heketi] INFO 2018/06/08 09:28:50 Started async operation: Delete Volume >[negroni] Started GET /queue/8e9d7af5d01821c5c89264a81e1b5e2c >[negroni] Completed 200 OK in 174.149µs >[kubeexec] DEBUG 2018/06/08 09:28:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:28:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:28:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:28:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:28:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:28:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:28:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:28:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:28:51 asynchttp.go:292: Completed job 8e9d7af5d01821c5c89264a81e1b5e2c in 
794.826895ms >[negroni] Started GET /queue/8e9d7af5d01821c5c89264a81e1b5e2c >[negroni] Completed 500 Internal Server Error in 315.014µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 30.337823ms >[asynchttp] INFO 2018/06/08 09:29:05 asynchttp.go:288: Started job a599a3a23058856d84483803f3353abe >[heketi] INFO 2018/06/08 09:29:05 Started async operation: Delete Volume >[negroni] Started GET /queue/a599a3a23058856d84483803f3353abe >[negroni] Completed 200 OK in 262.228µs >[kubeexec] DEBUG 2018/06/08 09:29:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:29:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:29:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:29:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: 
Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:29:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:29:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:29:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:29:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:29:06 asynchttp.go:292: Completed job a599a3a23058856d84483803f3353abe in 751.08926ms >[negroni] Started GET /queue/a599a3a23058856d84483803f3353abe >[negroni] Completed 500 Internal Server Error in 278.173µs >[heketi] INFO 2018/06/08 09:29:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:29:14 Check Glusterd service status in node 
dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:29:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 5min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:29:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:29:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:29:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 5min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:29:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:29:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:29:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 3min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:29:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:29:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 23.437629ms >[asynchttp] INFO 2018/06/08 09:31:05 asynchttp.go:288: Started job 8ab0e7402a67844eb8d72184bdd9a568 >[heketi] INFO 2018/06/08 09:31:05 Started async operation: Delete Volume >[negroni] Started GET /queue/8ab0e7402a67844eb8d72184bdd9a568 >[negroni] Completed 200 OK in 200.984µs >[kubeexec] DEBUG 2018/06/08 09:31:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:31:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:31:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:31:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:31:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:31:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:31:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:31:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:31:06 asynchttp.go:292: Completed job 8ab0e7402a67844eb8d72184bdd9a568 in 786.928306ms >[negroni] Started GET /queue/8ab0e7402a67844eb8d72184bdd9a568 >[negroni] Completed 500 Internal Server Error in 206.658µs >[heketi] INFO 2018/06/08 09:31:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:31:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:31:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 9min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p 
/var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:31:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:31:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:31:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 7min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:31:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:31:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:31:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 5min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:31:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:31:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 17.798254ms >[asynchttp] INFO 2018/06/08 09:31:20 asynchttp.go:288: Started job 2d31621f1091a07096dbce4dc581e4f0 >[heketi] INFO 2018/06/08 09:31:20 Started async operation: Delete Volume >[negroni] Started GET /queue/2d31621f1091a07096dbce4dc581e4f0 >[negroni] Completed 200 OK in 160.609µs >[kubeexec] DEBUG 2018/06/08 09:31:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:31:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:31:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume 
vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:31:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:31:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:31:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:31:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:31:21 asynchttp.go:292: Completed job 2d31621f1091a07096dbce4dc581e4f0 in 793.21031ms >[heketi] ERROR 2018/06/08 09:31:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does 
not exist >[negroni] Started GET /queue/2d31621f1091a07096dbce4dc581e4f0 >[negroni] Completed 500 Internal Server Error in 283.044µs >[heketi] INFO 2018/06/08 09:33:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:33:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:33:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 9min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option 
heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:33:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:33:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:33:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 9min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id 
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:33:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:33:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:33:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 7min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 
--volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:33:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:33:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 25.540196ms >[asynchttp] INFO 2018/06/08 09:33:20 asynchttp.go:288: Started job c2d4c096e925be6f447c2ae998426ef2 >[heketi] INFO 2018/06/08 09:33:20 Started async operation: Delete Volume >[negroni] Started GET /queue/c2d4c096e925be6f447c2ae998426ef2 >[negroni] Completed 200 OK in 178.048µs >[kubeexec] DEBUG 2018/06/08 09:33:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:33:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:33:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:33:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:33:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:33:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:33:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:33:21 asynchttp.go:292: Completed job c2d4c096e925be6f447c2ae998426ef2 in 740.28419ms >[heketi] ERROR 2018/06/08 09:33:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does 
not exist >[negroni] Started GET /queue/c2d4c096e925be6f447c2ae998426ef2 >[negroni] Completed 500 Internal Server Error in 142.768µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 19.328309ms >[asynchttp] INFO 2018/06/08 09:33:35 asynchttp.go:288: Started job a499293e1304c9f67982b8be149d2217 >[heketi] INFO 2018/06/08 09:33:35 Started async operation: Delete Volume >[negroni] Started GET /queue/a499293e1304c9f67982b8be149d2217 >[negroni] Completed 200 OK in 120.546µs >[kubeexec] DEBUG 2018/06/08 09:33:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:33:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:33:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:33:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command 
terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:33:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:33:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:33:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:33:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:33:36 asynchttp.go:292: Completed job a499293e1304c9f67982b8be149d2217 in 766.3984ms >[negroni] Started GET /queue/a499293e1304c9f67982b8be149d2217 >[negroni] Completed 500 Internal Server Error in 366.906µs >[heketi] INFO 2018/06/08 09:35:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:35:14 Check Glusterd service status in node 
dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:35:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 11min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:35:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:35:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:35:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 11min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:35:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:35:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:35:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 9min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:35:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:35:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 29.803998ms >[asynchttp] INFO 2018/06/08 09:35:35 asynchttp.go:288: Started job ac3d999d5150760dc7ab46387dd6e55f >[heketi] INFO 2018/06/08 09:35:35 Started async operation: Delete Volume >[negroni] Started GET /queue/ac3d999d5150760dc7ab46387dd6e55f >[negroni] Completed 200 OK in 201.549µs >[kubeexec] DEBUG 2018/06/08 09:35:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:35:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:35:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:35:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:35:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:35:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:35:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:35:36 asynchttp.go:292: Completed job ac3d999d5150760dc7ab46387dd6e55f in 751.999697ms >[heketi] ERROR 2018/06/08 09:35:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[negroni] Started GET /queue/ac3d999d5150760dc7ab46387dd6e55f >[negroni] Completed 500 Internal Server Error in 184.613µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 30.364725ms >[asynchttp] INFO 2018/06/08 09:35:50 asynchttp.go:288: Started job b618307c047c4b54578a12d534399dae >[heketi] INFO 2018/06/08 09:35:50 Started async operation: Delete Volume >[negroni] Started GET /queue/b618307c047c4b54578a12d534399dae >[negroni] Completed 200 OK in 173.858µs >[kubeexec] DEBUG 2018/06/08 09:35:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:35:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:35:51 
/src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:35:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:35:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:35:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:35:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:35:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on 
glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:35:51 asynchttp.go:292: Completed job b618307c047c4b54578a12d534399dae in 773.380988ms >[negroni] Started GET /queue/b618307c047c4b54578a12d534399dae >[negroni] Completed 500 Internal Server Error in 192.758µs >[heketi] INFO 2018/06/08 09:37:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:37:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:37:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 13min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:37:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:37:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:37:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 13min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:37:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:37:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:37:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 11min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > ââ18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:37:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:37:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 33.841851ms >[asynchttp] INFO 2018/06/08 09:37:50 asynchttp.go:288: Started job ca99e89505f5d0eb0db763e7d6005e54 >[heketi] INFO 2018/06/08 09:37:50 Started async operation: Delete Volume >[negroni] Started GET /queue/ca99e89505f5d0eb0db763e7d6005e54 >[negroni] Completed 200 OK in 181.622µs >[kubeexec] DEBUG 2018/06/08 09:37:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:37:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:37:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:37:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:37:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:37:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:37:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:37:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:37:51 asynchttp.go:292: Completed job ca99e89505f5d0eb0db763e7d6005e54 in 
805.641713ms >[negroni] Started GET /queue/ca99e89505f5d0eb0db763e7d6005e54 >[negroni] Completed 500 Internal Server Error in 137.435µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 24.741111ms >[asynchttp] INFO 2018/06/08 09:38:05 asynchttp.go:288: Started job d9257a81c6c1968a88ffa07c728881e3 >[heketi] INFO 2018/06/08 09:38:05 Started async operation: Delete Volume >[negroni] Started GET /queue/d9257a81c6c1968a88ffa07c728881e3 >[negroni] Completed 200 OK in 190.371µs >[kubeexec] DEBUG 2018/06/08 09:38:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:38:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:38:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:38:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: 
Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:38:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:38:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:38:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:38:06 asynchttp.go:292: Completed job d9257a81c6c1968a88ffa07c728881e3 in 767.315424ms >[heketi] ERROR 2018/06/08 09:38:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/d9257a81c6c1968a88ffa07c728881e3 >[negroni] Completed 500 Internal Server Error in 240.335µs >[heketi] INFO 2018/06/08 09:39:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:39:14 Check Glusterd service status in node 
dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:39:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 15min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option 
*replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:39:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:39:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:39:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 15min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S 
/var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:39:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:39:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:39:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 13min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: 
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:39:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:39:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 19.771509ms >[asynchttp] INFO 2018/06/08 09:40:05 asynchttp.go:288: Started job d51bb402b86fdf52fd5cd76853c9c9cc >[heketi] INFO 2018/06/08 09:40:05 Started async operation: Delete Volume >[negroni] Started GET /queue/d51bb402b86fdf52fd5cd76853c9c9cc >[negroni] Completed 200 OK in 145.967µs >[kubeexec] DEBUG 2018/06/08 09:40:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:40:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on 
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:40:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:40:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:40:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:40:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:40:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume 
vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:40:06 asynchttp.go:292: Completed job d51bb402b86fdf52fd5cd76853c9c9cc in 814.931023ms >[heketi] ERROR 2018/06/08 09:40:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[negroni] Started GET /queue/d51bb402b86fdf52fd5cd76853c9c9cc >[negroni] Completed 500 Internal Server Error in 279.118µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 28.397208ms >[asynchttp] INFO 2018/06/08 09:40:20 asynchttp.go:288: Started job d4f67b69f3e6b075c72d0efac5f1bbe4 >[heketi] INFO 2018/06/08 09:40:20 Started async operation: Delete Volume >[negroni] Started GET /queue/d4f67b69f3e6b075c72d0efac5f1bbe4 >[negroni] Completed 200 OK in 175.312µs >[kubeexec] DEBUG 2018/06/08 09:40:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:40:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:40:21 
/src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:40:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:40:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:40:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:40:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:40:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on 
glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:40:21 asynchttp.go:292: Completed job d4f67b69f3e6b075c72d0efac5f1bbe4 in 769.429683ms >[negroni] Started GET /queue/d4f67b69f3e6b075c72d0efac5f1bbe4 >[negroni] Completed 500 Internal Server Error in 127.35µs >[heketi] INFO 2018/06/08 09:41:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:41:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 17min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:41:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:41:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 17min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:41:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:41:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:41:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 15min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > ââ18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103)
>[heketi] INFO 2018/06/08 09:41:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 09:41:14 Cleaned 0 nodes from health cache
>[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 202 Accepted in 27.846178ms
>[asynchttp] INFO 2018/06/08 09:42:20 asynchttp.go:288: Started job aefbe0a3936e46d0016846316d349426
>[heketi] INFO 2018/06/08 09:42:20 Started async operation: Delete Volume
>[negroni] Started GET /queue/aefbe0a3936e46d0016846316d349426
>[negroni] Completed 200 OK in 170.571µs
>[kubeexec] DEBUG 2018/06/08 09:42:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 09:42:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:42:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[kubeexec] ERROR 2018/06/08 09:42:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:42:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 09:42:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 09:42:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 09:42:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[asynchttp] INFO 2018/06/08 09:42:21 asynchttp.go:292: Completed job aefbe0a3936e46d0016846316d349426 in 819.987698ms
>[negroni] Started GET /queue/aefbe0a3936e46d0016846316d349426
>[negroni] Completed 500 Internal Server Error in 149.111µs
>[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 202 Accepted in 20.758907ms
>[asynchttp] INFO 2018/06/08 09:42:35 asynchttp.go:288: Started job c90347d62327f9658943dbc7cb811e91
>[heketi] INFO 2018/06/08 09:42:35 Started async operation: Delete Volume
>[negroni] Started GET /queue/c90347d62327f9658943dbc7cb811e91
>[negroni] Completed 200 OK in 179.143µs
>[kubeexec] DEBUG 2018/06/08 09:42:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 09:42:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:42:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[kubeexec] ERROR 2018/06/08 09:42:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:42:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:42:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:42:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:42:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[asynchttp] INFO 2018/06/08 09:42:36 asynchttp.go:292: Completed job c90347d62327f9658943dbc7cb811e91 in 805.062677ms
>[negroni] Started GET /queue/c90347d62327f9658943dbc7cb811e91
>[negroni] Completed 500 Internal Server Error in 212.155µs
>[heketi] INFO 2018/06/08 09:43:14 Starting Node Health Status refresh
>[cmdexec] INFO 2018/06/08 09:43:14 Check Glusterd service status in node
dhcp46-187.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 09:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 19min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034
>
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed.
>Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 09:43:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true
>[cmdexec] INFO 2018/06/08 09:43:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 09:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 19min ago
> Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 433 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service
> ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─21076 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0
>
>Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>[heketi] INFO 2018/06/08 09:43:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true
>[cmdexec] INFO 2018/06/08 09:43:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/06/08 09:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 17min ago
> Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 830 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service
> ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153
> └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94
>
>Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server...
>Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server.
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103)
>[heketi] INFO 2018/06/08 09:43:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true
>[heketi] INFO 2018/06/08 09:43:14 Cleaned 0 nodes from health cache
>[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 202 Accepted in 31.76546ms
>[asynchttp] INFO 2018/06/08 09:44:35 asynchttp.go:288: Started job ea262c8ac9568e549031800202ecac77
>[heketi] INFO 2018/06/08 09:44:35 Started async operation: Delete Volume
>[negroni] Started GET /queue/ea262c8ac9568e549031800202ecac77
>[negroni] Completed 200 OK in 169.885µs
>[kubeexec] DEBUG 2018/06/08 09:44:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 09:44:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on
glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:44:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[kubeexec] ERROR 2018/06/08 09:44:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:44:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 09:44:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[heketi] ERROR 2018/06/08 09:44:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[asynchttp] INFO 2018/06/08 09:44:36 asynchttp.go:292: Completed job ea262c8ac9568e549031800202ecac77 in 787.844236ms
>[heketi] ERROR 2018/06/08 09:44:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
>[negroni] Started GET /queue/ea262c8ac9568e549031800202ecac77
>[negroni] Completed 500 Internal Server Error in 250.854µs
>[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[asynchttp] INFO 2018/06/08 09:44:50 asynchttp.go:288: Started job 6428433f8176b0c80ce6d06033f7adff
>[heketi] INFO 2018/06/08 09:44:50 Started async operation: Delete Volume
>[negroni] Completed 202 Accepted in 26.332473ms
>[negroni] Started GET /queue/6428433f8176b0c80ce6d06033f7adff
>[negroni] Completed 200 OK in 177.537µs
>[kubeexec] DEBUG 2018/06/08 09:44:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>-1</opRet>
> <opErrno>30806</opErrno>
> <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr>
></cliOutput>
>[kubeexec] ERROR 2018/06/08 09:44:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:44:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[kubeexec] ERROR 2018/06/08 09:44:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>]
>[cmdexec] ERROR 2018/06/08 09:44:51 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:44:51 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:44:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist
>[heketi] ERROR 2018/06/08 09:44:51 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on
glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:44:51 asynchttp.go:292: Completed job 6428433f8176b0c80ce6d06033f7adff in 787.913606ms >[negroni] Started GET /queue/6428433f8176b0c80ce6d06033f7adff >[negroni] Completed 500 Internal Server Error in 187.988µs >[heketi] INFO 2018/06/08 09:45:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:45:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:45:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 21min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name 
/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─20535 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:45:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:45:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:45:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 21min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─21076 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:45:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:45:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:45:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 19min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─18989 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:45:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:45:14 Cleaned 0 nodes from health cache >[negroni] Started POST /volumes >[heketi] INFO 2018/06/08 09:46:07 Allocating brick set #0 >[negroni] Completed 202 Accepted in 55.299668ms >[asynchttp] INFO 2018/06/08 09:46:07 asynchttp.go:288: Started job 89c9429a70d0111e1376276bdec3c860 >[heketi] INFO 2018/06/08 09:46:07 Started async operation: Create Volume >[negroni] Started GET /queue/89c9429a70d0111e1376276bdec3c860 >[negroni] Completed 200 OK in 140.95µs >[heketi] INFO 2018/06/08 09:46:07 Creating brick c6c4ef98245b2956a8e3481fb27eb337 >[heketi] INFO 2018/06/08 09:46:07 Creating brick af30399c48acd8c5470723926d83d4c5 >[heketi] INFO 2018/06/08 09:46:07 Creating brick 8f286a6ba640556678264621e234d19d >[kubeexec] DEBUG 2018/06/08 09:46:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_af30399c48acd8c5470723926d83d4c5 >Result: >[kubeexec] DEBUG 2018/06/08 09:46:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8f286a6ba640556678264621e234d19d >Result: >[kubeexec] DEBUG 2018/06/08 09:46:07 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c6c4ef98245b2956a8e3481fb27eb337 >Result: >[kubeexec] DEBUG 2018/06/08 09:46:08 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_3a4297677881963e3f80124971d50eea/tp_8f286a6ba640556678264621e234d19d --virtualsize 1048576K --name brick_8f286a6ba640556678264621e234d19d >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_8f286a6ba640556678264621e234d19d" created. >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_d389f0278a774bd7443a09af960961d8/tp_af30399c48acd8c5470723926d83d4c5 --virtualsize 1048576K --name brick_af30399c48acd8c5470723926d83d4c5 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_af30399c48acd8c5470723926d83d4c5" created. >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_9394bc70699b006c5460c9f654cf345f/tp_c6c4ef98245b2956a8e3481fb27eb337 --virtualsize 1048576K --name brick_c6c4ef98245b2956a8e3481fb27eb337 >Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data. > Logical volume "brick_c6c4ef98245b2956a8e3481fb27eb337" created. 
>[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8f286a6ba640556678264621e234d19d >Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8f286a6ba640556678264621e234d19d isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c6c4ef98245b2956a8e3481fb27eb337 >Result: meta-data=/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c6c4ef98245b2956a8e3481fb27eb337 isize=512 agcount=8, agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_af30399c48acd8c5470723926d83d4c5 >Result: meta-data=/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_af30399c48acd8c5470723926d83d4c5 isize=512 agcount=8, 
agsize=32768 blks > = sectsz=512 attr=2, projid32bit=1 > = crc=1 finobt=0, sparse=0 >data = bsize=4096 blocks=262144, imaxpct=25 > = sunit=64 swidth=64 blks >naming =version 2 bsize=8192 ascii-ci=0 ftype=1 >log =internal log bsize=4096 blocks=2560, version=2 > = sectsz=512 sunit=64 blks, lazy-count=1 >realtime =none extsz=4096 blocks=0, rtextents=0 >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8f286a6ba640556678264621e234d19d /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8f286a6ba640556678264621e234d19d xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c6c4ef98245b2956a8e3481fb27eb337 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c6c4ef98245b2956a8e3481fb27eb337 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_af30399c48acd8c5470723926d83d4c5 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_af30399c48acd8c5470723926d83d4c5 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}" >Result: >[negroni] Started GET /queue/89c9429a70d0111e1376276bdec3c860 >[negroni] Completed 200 OK in 122.729µs >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: 
glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c6c4ef98245b2956a8e3481fb27eb337 /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c6c4ef98245b2956a8e3481fb27eb337 >Result: >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8f286a6ba640556678264621e234d19d /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8f286a6ba640556678264621e234d19d >Result: >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_af30399c48acd8c5470723926d83d4c5 /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_af30399c48acd8c5470723926d83d4c5 >Result: >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c6c4ef98245b2956a8e3481fb27eb337/brick >Result: >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8f286a6ba640556678264621e234d19d/brick >Result: >[kubeexec] DEBUG 2018/06/08 09:46:08 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_af30399c48acd8c5470723926d83d4c5/brick >Result: >[cmdexec] 
INFO 2018/06/08 09:46:08 Creating volume vol_5e468916a8596cd844231548ccb61fc4 replica 3 >[kubeexec] DEBUG 2018/06/08 09:46:09 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_5e468916a8596cd844231548ccb61fc4 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8f286a6ba640556678264621e234d19d/brick 10.70.47.76:/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c6c4ef98245b2956a8e3481fb27eb337/brick 10.70.46.187:/var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_af30399c48acd8c5470723926d83d4c5/brick >Result: volume create: vol_5e468916a8596cd844231548ccb61fc4: success: please start the volume to access data >[negroni] Started GET /queue/89c9429a70d0111e1376276bdec3c860 >[negroni] Completed 200 OK in 163.892µs >[negroni] Started GET /queue/89c9429a70d0111e1376276bdec3c860 >[negroni] Completed 200 OK in 142.503µs >[kubeexec] DEBUG 2018/06/08 09:46:11 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_5e468916a8596cd844231548ccb61fc4 >Result: volume start: vol_5e468916a8596cd844231548ccb61fc4: success >[heketi] INFO 2018/06/08 09:46:11 Create Volume succeeded >[asynchttp] INFO 2018/06/08 09:46:11 asynchttp.go:292: Completed job 89c9429a70d0111e1376276bdec3c860 in 3.79732313s >[negroni] Started GET /queue/89c9429a70d0111e1376276bdec3c860 >[negroni] Completed 303 See Other in 142.507µs >[negroni] Started GET /volumes/5e468916a8596cd844231548ccb61fc4 >[negroni] Completed 200 OK in 5.547411ms >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 1.010554ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 489.884µs >[negroni] Started GET /volumes/07ef5105131fa51c35a9007ee213ea7a >[negroni] 
Completed 200 OK in 2.821207ms >[negroni] Started GET /volumes/15e0122e942fc41f80666a3714670682 >[negroni] Completed 200 OK in 1.507181ms >[negroni] Started GET /volumes/1918777ef3ce84df17c8a114fb89f33e >[negroni] Completed 200 OK in 2.182215ms >[negroni] Started GET /volumes/1a828ff2310b778d09ffdadd755dc5ee >[negroni] Completed 200 OK in 1.562983ms >[negroni] Started GET /volumes/1ef58be42cf8ea7cf9298cff303e903a >[negroni] Completed 200 OK in 2.178457ms >[negroni] Started GET /volumes/21481d8911fe8ec238d97d71c1aa5cb3 >[negroni] Completed 200 OK in 631.992µs >[negroni] Started GET /volumes/226838416791f3286fcacb7e5f1ff59d >[negroni] Completed 200 OK in 2.025712ms >[negroni] Started GET /volumes/2bf097c60bd8b38bfcb4327727ca5681 >[negroni] Completed 200 OK in 1.559063ms >[negroni] Started GET /volumes/337bf2c01bf8c45eec5bab53ad5c2e46 >[negroni] Completed 200 OK in 2.012177ms >[negroni] Started GET /volumes/43194b98d83e8b61b376ccc54f79333a >[negroni] Completed 200 OK in 1.019834ms >[negroni] Started GET /volumes/5d0dfb0ebb846fcd225c890ec9cdb885 >[negroni] Completed 200 OK in 2.146701ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.485189ms >[negroni] Started GET /volumes/5e468916a8596cd844231548ccb61fc4 >[negroni] Completed 200 OK in 1.540386ms >[negroni] Started GET /volumes/7d5a429f821efc8e8fe3f29569732b86 >[negroni] Completed 200 OK in 1.550956ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 2.035948ms >[negroni] Started GET /volumes/8757685f765bbd74556cbf75086c88f6 >[negroni] Completed 200 OK in 568.778µs >[negroni] Started GET /volumes/89ebb1e7eed2ff557488996a1657e75e >[negroni] Completed 200 OK in 1.141758ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.386182ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 1.571171ms >[negroni] Started GET 
/volumes/9f9fb17746d0aad637b132875d2744e5 >[negroni] Completed 200 OK in 989.229µs >[negroni] Started GET /volumes/9fb2830da79dd70d910dad8426dc236f >[negroni] Completed 200 OK in 604.467µs >[negroni] Started GET /volumes/a6be87754541710c38b420381c76fb8c >[negroni] Completed 200 OK in 1.400676ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.55612ms >[negroni] Started GET /volumes/a8678c97e2708cf6e00aea160a4d46a0 >[negroni] Completed 200 OK in 518.633µs >[negroni] Started GET /volumes/ad1e5849e9566f1bcaa09cfb9c0b96ef >[negroni] Completed 200 OK in 1.453336ms >[negroni] Started GET /volumes/aee5088d2304cb95535752ef85f9f392 >[negroni] Completed 200 OK in 594.324µs >[negroni] Started GET /volumes/afef5f4a84abec3ab3e4f2a5bae2db23 >[negroni] Completed 200 OK in 1.310227ms >[negroni] Started GET /volumes/b7fb8f19c77039982909b8868f7af4cc >[negroni] Completed 200 OK in 566.924µs >[negroni] Started GET /volumes/b8f78f128f20b61ef032ce9ee5b6481c >[negroni] Completed 200 OK in 1.118721ms >[negroni] Started GET /volumes/b99532640a5201d243193159ee762ae4 >[negroni] Completed 200 OK in 881.186µs >[negroni] Started GET /volumes/bf31af76ef6c54e7e8f24f4d8711cb22 >[negroni] Completed 200 OK in 1.121639ms >[negroni] Started GET /volumes/c1dce5388e8c136a89e4a25e4cc97821 >[negroni] Completed 200 OK in 588.052µs >[negroni] Started GET /volumes/cc8a686464bb4017c91ba7294ee1b091 >[negroni] Completed 200 OK in 1.120988ms >[negroni] Started GET /volumes/d21ed39ae095af2674175a798a0cb02c >[negroni] Completed 200 OK in 572.974µs >[negroni] Started GET /volumes/daf13b7f607d1b280c78e909af25a215 >[negroni] Completed 200 OK in 1.052123ms >[negroni] Started GET /volumes/dc9ab13a25ccbad8262fba92766a31f9 >[negroni] Completed 200 OK in 1.027713ms >[negroni] Started GET /volumes/dcaf25d0becadd0bb3c732c2c2ca27da >[negroni] Completed 200 OK in 558.699µs >[negroni] Started GET /volumes/e25438438fd2d50a0b07f26b4bfb338a >[negroni] Completed 200 OK in 1.070052ms 
>[negroni] Started GET /volumes/f73270331b95278f490fd1dfe0b010df >[negroni] Completed 200 OK in 661.391µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 4.345761ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 4.148076ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 3.664459ms >[negroni] Started POST /devices >[heketi] INFO 2018/06/08 09:46:12 Adding device /dev/sdf to node 70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 202 Accepted in 9.172133ms >[asynchttp] INFO 2018/06/08 09:46:12 asynchttp.go:288: Started job 79fc2599d8706785b113872a5ec9fdc1 >[negroni] Started GET /queue/79fc2599d8706785b113872a5ec9fdc1 >[negroni] Completed 200 OK in 155.794µs >[kubeexec] DEBUG 2018/06/08 09:46:12 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: pvcreate --metadatasize=128M --dataalignment=256K '/dev/sdf' >Result: Physical volume "/dev/sdf" successfully created. 
>[kubeexec] DEBUG 2018/06/08 09:46:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgcreate --autobackup=n vg_f6e4e4c13df4918c0c15e24e24828609 /dev/sdf >Result: Volume group "vg_f6e4e4c13df4918c0c15e24e24828609" successfully created >[negroni] Started GET /queue/79fc2599d8706785b113872a5ec9fdc1 >[negroni] Completed 200 OK in 118.883µs >[kubeexec] DEBUG 2018/06/08 09:46:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgdisplay -c vg_f6e4e4c13df4918c0c15e24e24828609 >Result: vg_f6e4e4c13df4918c0c15e24e24828609:r/w:772:-1:0:0:0:-1:0:1:1:104722432:4096:25567:0:25567:mYTw1o-GBnR-HUKF-0DK9-Nvy6-pAg8-DaSKx0 >[cmdexec] DEBUG 2018/06/08 09:46:13 /src/github.com/heketi/heketi/executors/cmdexec/device.go:147: Size of /dev/sdf in dhcp46-122.lab.eng.blr.redhat.com is 104722432 >[heketi] INFO 2018/06/08 09:46:13 Added device /dev/sdf >[asynchttp] INFO 2018/06/08 09:46:13 asynchttp.go:292: Completed job 79fc2599d8706785b113872a5ec9fdc1 in 1.144119331s >[negroni] Started GET /queue/79fc2599d8706785b113872a5ec9fdc1 >[negroni] Completed 204 No Content in 180.292µs >[negroni] Started GET /clusters >[negroni] Completed 200 OK in 3.263757ms >[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d >[negroni] Completed 200 OK in 499.333µs >[negroni] Started GET /volumes/07ef5105131fa51c35a9007ee213ea7a >[negroni] Completed 200 OK in 4.979446ms >[negroni] Started GET /volumes/15e0122e942fc41f80666a3714670682 >[negroni] Completed 200 OK in 1.869612ms >[negroni] Started GET /volumes/1918777ef3ce84df17c8a114fb89f33e >[negroni] Completed 200 OK in 2.879328ms >[negroni] Started GET /volumes/1a828ff2310b778d09ffdadd755dc5ee >[negroni] Completed 200 OK in 1.394615ms >[negroni] Started GET /volumes/1ef58be42cf8ea7cf9298cff303e903a >[negroni] Completed 200 OK in 2.69488ms >[negroni] Started GET 
/volumes/21481d8911fe8ec238d97d71c1aa5cb3 >[negroni] Completed 200 OK in 981.1µs >[negroni] Started GET /volumes/226838416791f3286fcacb7e5f1ff59d >[negroni] Completed 200 OK in 2.576768ms >[negroni] Started GET /volumes/2bf097c60bd8b38bfcb4327727ca5681 >[negroni] Completed 200 OK in 2.143052ms >[negroni] Started GET /volumes/337bf2c01bf8c45eec5bab53ad5c2e46 >[negroni] Completed 200 OK in 2.593174ms >[negroni] Started GET /volumes/43194b98d83e8b61b376ccc54f79333a >[negroni] Completed 200 OK in 1.306531ms >[negroni] Started GET /volumes/5d0dfb0ebb846fcd225c890ec9cdb885 >[negroni] Completed 200 OK in 2.145851ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 2.840117ms >[negroni] Started GET /volumes/5e468916a8596cd844231548ccb61fc4 >[negroni] Completed 200 OK in 2.450625ms >[negroni] Started GET /volumes/7d5a429f821efc8e8fe3f29569732b86 >[negroni] Completed 200 OK in 1.767407ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 2.579905ms >[negroni] Started GET /volumes/8757685f765bbd74556cbf75086c88f6 >[negroni] Completed 200 OK in 1.070595ms >[negroni] Started GET /volumes/89ebb1e7eed2ff557488996a1657e75e >[negroni] Completed 200 OK in 2.009945ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 2.533524ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 2.105931ms >[negroni] Started GET /volumes/9f9fb17746d0aad637b132875d2744e5 >[negroni] Completed 200 OK in 1.634921ms >[negroni] Started GET /volumes/9fb2830da79dd70d910dad8426dc236f >[negroni] Completed 200 OK in 530.44µs >[negroni] Started GET /volumes/a6be87754541710c38b420381c76fb8c >[negroni] Completed 200 OK in 2.341443ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.595538ms >[negroni] Started GET /volumes/a8678c97e2708cf6e00aea160a4d46a0 >[negroni] Completed 200 OK in 
1.024672ms >[negroni] Started GET /volumes/ad1e5849e9566f1bcaa09cfb9c0b96ef >[negroni] Completed 200 OK in 1.761094ms >[negroni] Started GET /volumes/aee5088d2304cb95535752ef85f9f392 >[negroni] Completed 200 OK in 1.021799ms >[negroni] Started GET /volumes/afef5f4a84abec3ab3e4f2a5bae2db23 >[negroni] Completed 200 OK in 1.065468ms >[negroni] Started GET /volumes/b7fb8f19c77039982909b8868f7af4cc >[negroni] Completed 200 OK in 570.97µs >[negroni] Started GET /volumes/b8f78f128f20b61ef032ce9ee5b6481c >[negroni] Completed 200 OK in 3.571203ms >[negroni] Started GET /volumes/b99532640a5201d243193159ee762ae4 >[negroni] Completed 200 OK in 567.961µs >[negroni] Started GET /volumes/bf31af76ef6c54e7e8f24f4d8711cb22 >[negroni] Completed 200 OK in 2.452355ms >[negroni] Started GET /volumes/c1dce5388e8c136a89e4a25e4cc97821 >[negroni] Completed 200 OK in 727.737µs >[negroni] Started GET /volumes/cc8a686464bb4017c91ba7294ee1b091 >[negroni] Completed 200 OK in 2.234857ms >[negroni] Started GET /volumes/d21ed39ae095af2674175a798a0cb02c >[negroni] Completed 200 OK in 915.635µs >[negroni] Started GET /volumes/daf13b7f607d1b280c78e909af25a215 >[negroni] Completed 200 OK in 1.058686ms >[negroni] Started GET /volumes/dc9ab13a25ccbad8262fba92766a31f9 >[negroni] Completed 200 OK in 1.160171ms >[negroni] Started GET /volumes/dcaf25d0becadd0bb3c732c2c2ca27da >[negroni] Completed 200 OK in 687.712µs >[negroni] Started GET /volumes/e25438438fd2d50a0b07f26b4bfb338a >[negroni] Completed 200 OK in 1.263717ms >[negroni] Started GET /volumes/f73270331b95278f490fd1dfe0b010df >[negroni] Completed 200 OK in 673.294µs >[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1 >[negroni] Completed 200 OK in 7.131237ms >[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49 >[negroni] Completed 200 OK in 4.28294ms >[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650 >[negroni] Completed 200 OK in 4.174437ms >[negroni] Started POST /devices/f6e4e4c13df4918c0c15e24e24828609/state 
>[negroni] Completed 202 Accepted in 412.578µs
>[asynchttp] INFO 2018/06/08 09:46:14 asynchttp.go:288: Started job c7e2e278b9c46482dba755d08dec882e
>[negroni] Started GET /queue/c7e2e278b9c46482dba755d08dec882e
>[negroni] Completed 200 OK in 189.448µs
>[asynchttp] INFO 2018/06/08 09:46:14 asynchttp.go:292: Completed job c7e2e278b9c46482dba755d08dec882e in 16.999273ms
>[negroni] Started GET /queue/c7e2e278b9c46482dba755d08dec882e
>[negroni] Completed 204 No Content in 137.678µs
>[negroni] Started POST /devices/f6e4e4c13df4918c0c15e24e24828609/state
>[negroni] Completed 202 Accepted in 3.838258ms
>[asynchttp] INFO 2018/06/08 09:46:15 asynchttp.go:288: Started job e3b26b41b8396a6085cd5cc3dee0fe8e
>[heketi] INFO 2018/06/08 09:46:15 Running Remove Device
>[negroni] Started GET /queue/e3b26b41b8396a6085cd5cc3dee0fe8e
>[negroni] Completed 200 OK in 147.133µs
>[asynchttp] INFO 2018/06/08 09:46:15 asynchttp.go:292: Completed job e3b26b41b8396a6085cd5cc3dee0fe8e in 16.467713ms
>[negroni] Started GET /queue/e3b26b41b8396a6085cd5cc3dee0fe8e
>[negroni] Completed 204 No Content in 229.441µs
>[negroni] Started DELETE /devices/f6e4e4c13df4918c0c15e24e24828609
>[heketi] INFO 2018/06/08 09:46:16 Deleting device f6e4e4c13df4918c0c15e24e24828609 on node 70423cc0bcd044fe5ba8bbbd256a3e49
>[negroni] Completed 202 Accepted in 1.703447ms
>[asynchttp] INFO 2018/06/08 09:46:16 asynchttp.go:288: Started job c4965ed118a6d367683cec4c41b33d71
>[negroni] Started GET /queue/c4965ed118a6d367683cec4c41b33d71
>[negroni] Completed 200 OK in 99.843µs
>[kubeexec] DEBUG 2018/06/08 09:46:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: vgremove vg_f6e4e4c13df4918c0c15e24e24828609
>Result: Volume group "vg_f6e4e4c13df4918c0c15e24e24828609" successfully removed
>[kubeexec] DEBUG 2018/06/08 09:46:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: pvremove '/dev/sdf'
>Result: Labels on physical volume "/dev/sdf" successfully wiped.
>[kubeexec] ERROR 2018/06/08 09:46:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [ls /var/lib/heketi/mounts/vg_f6e4e4c13df4918c0c15e24e24828609] on glusterfs-storage-pg4xc: Err[command terminated with exit code 2]: Stdout []: Stderr [ls: cannot access /var/lib/heketi/mounts/vg_f6e4e4c13df4918c0c15e24e24828609: No such file or directory
>]
>[heketi] INFO 2018/06/08 09:46:17 Deleted node [f6e4e4c13df4918c0c15e24e24828609]
>[asynchttp] INFO 2018/06/08 09:46:17 asynchttp.go:292: Completed job c4965ed118a6d367683cec4c41b33d71 in 818.099975ms
>[negroni] Started GET /queue/c4965ed118a6d367683cec4c41b33d71
>[negroni] Completed 204 No Content in 180.642µs
>[negroni] Started GET /clusters
>[negroni] Completed 200 OK in 2.364399ms
>[negroni] Started GET /clusters/0a73c60efdd4673113b668afea101e6d
>[negroni] Completed 200 OK in 261.699µs
>[negroni] Started GET /volumes/07ef5105131fa51c35a9007ee213ea7a
>[negroni] Completed 200 OK in 3.449701ms
>[negroni] Started GET /volumes/15e0122e942fc41f80666a3714670682
>[negroni] Completed 200 OK in 1.568599ms
>[negroni] Started GET /volumes/1918777ef3ce84df17c8a114fb89f33e
>[negroni] Completed 200 OK in 2.326236ms
>[negroni] Started GET /volumes/1a828ff2310b778d09ffdadd755dc5ee
>[negroni] Completed 200 OK in 1.484134ms
>[negroni] Started GET /volumes/1ef58be42cf8ea7cf9298cff303e903a
>[negroni] Completed 200 OK in 2.536054ms
>[negroni] Started GET /volumes/21481d8911fe8ec238d97d71c1aa5cb3
>[negroni] Completed 200 OK in 620.071µs
>[negroni] Started GET /volumes/226838416791f3286fcacb7e5f1ff59d
>[negroni] Completed 200 OK in 1.630189ms
>[negroni] Started GET /volumes/2bf097c60bd8b38bfcb4327727ca5681
>[negroni] Completed 200 OK in 1.244244ms
>[negroni] Started GET /volumes/337bf2c01bf8c45eec5bab53ad5c2e46
>[negroni] Completed 200 OK in 1.767543ms
>[negroni] Started GET /volumes/43194b98d83e8b61b376ccc54f79333a
>[negroni] Completed 200 OK in 1.014975ms
>[negroni] Started GET /volumes/5d0dfb0ebb846fcd225c890ec9cdb885
>[negroni] Completed 200 OK in 1.495311ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.950881ms
>[negroni] Started GET /volumes/5e468916a8596cd844231548ccb61fc4
>[negroni] Completed 200 OK in 1.498553ms
>[negroni] Started GET /volumes/7d5a429f821efc8e8fe3f29569732b86
>[negroni] Completed 200 OK in 983.138µs
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.395604ms
>[negroni] Started GET /volumes/8757685f765bbd74556cbf75086c88f6
>[negroni] Completed 200 OK in 546.023µs
>[negroni] Started GET /volumes/89ebb1e7eed2ff557488996a1657e75e
>[negroni] Completed 200 OK in 1.090569ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 1.550387ms
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 1.570478ms
>[negroni] Started GET /volumes/9f9fb17746d0aad637b132875d2744e5
>[negroni] Completed 200 OK in 1.506316ms
>[negroni] Started GET /volumes/9fb2830da79dd70d910dad8426dc236f
>[negroni] Completed 200 OK in 575.326µs
>[negroni] Started GET /volumes/a6be87754541710c38b420381c76fb8c
>[negroni] Completed 200 OK in 1.898366ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 3.546511ms
>[negroni] Started GET /volumes/a8678c97e2708cf6e00aea160a4d46a0
>[negroni] Completed 200 OK in 1.051006ms
>[negroni] Started GET /volumes/ad1e5849e9566f1bcaa09cfb9c0b96ef
>[negroni] Completed 200 OK in 1.63748ms
>[negroni] Started GET /volumes/aee5088d2304cb95535752ef85f9f392
>[negroni] Completed 200 OK in 1.018125ms
>[negroni] Started GET /volumes/afef5f4a84abec3ab3e4f2a5bae2db23
>[negroni] Completed 200 OK in 2.551033ms
>[negroni] Started GET /volumes/b7fb8f19c77039982909b8868f7af4cc
>[negroni] Completed 200 OK in 975.547µs
>[negroni] Started GET /volumes/b8f78f128f20b61ef032ce9ee5b6481c
>[negroni] Completed 200 OK in 1.341515ms
>[negroni] Started GET /volumes/b99532640a5201d243193159ee762ae4
>[negroni] Completed 200 OK in 597.472µs
>[negroni] Started GET /volumes/bf31af76ef6c54e7e8f24f4d8711cb22
>[negroni] Completed 200 OK in 1.011749ms
>[negroni] Started GET /volumes/c1dce5388e8c136a89e4a25e4cc97821
>[negroni] Completed 200 OK in 556.759µs
>[negroni] Started GET /volumes/cc8a686464bb4017c91ba7294ee1b091
>[negroni] Completed 200 OK in 1.131836ms
>[negroni] Started GET /volumes/d21ed39ae095af2674175a798a0cb02c
>[negroni] Completed 200 OK in 996.355µs
>[negroni] Started GET /volumes/daf13b7f607d1b280c78e909af25a215
>[negroni] Completed 200 OK in 979.207µs
>[negroni] Started GET /volumes/dc9ab13a25ccbad8262fba92766a31f9
>[negroni] Completed 200 OK in 1.600281ms
>[negroni] Started GET /volumes/dcaf25d0becadd0bb3c732c2c2ca27da
>[negroni] Completed 200 OK in 976.322µs
>[negroni] Started GET /volumes/e25438438fd2d50a0b07f26b4bfb338a
>[negroni] Completed 200 OK in 1.432427ms
>[negroni] Started GET /volumes/f73270331b95278f490fd1dfe0b010df
>[negroni] Completed 200 OK in 929.041µs
>[negroni] Started GET /nodes/278bd6b4e16a8e62ef15aaae22e6abc1
>[negroni] Completed 200 OK in 6.659692ms
>[negroni] Started GET /nodes/70423cc0bcd044fe5ba8bbbd256a3e49
>[negroni] Completed 200 OK in 3.813247ms
>[negroni] Started GET /nodes/d942fe6c0ee5691b7cc263968f97b650
>[negroni] Completed 200 OK in 4.0463ms
>[negroni] Started POST /volumes
>[heketi] INFO 2018/06/08 09:46:18 Allocating brick set #0
>[negroni] Completed 202 Accepted in 23.590211ms
>[asynchttp] INFO 2018/06/08 09:46:18 asynchttp.go:288: Started job 83fd9f1d6abffe2a8c2d47a2a9dc9349
>[heketi] INFO 2018/06/08 09:46:18 Started async operation: Create Volume
>[negroni] Started GET /queue/83fd9f1d6abffe2a8c2d47a2a9dc9349
>[negroni] Completed 200 OK in 110.35µs
>[heketi] INFO 2018/06/08 09:46:18 Creating brick 7d65ac61f84807c36a3615033d19a5d6
>[heketi] INFO 2018/06/08 09:46:18 Creating brick 7bb12d9946ceefb336013d020f571cf6
>[heketi] INFO 2018/06/08 09:46:18 Creating brick eaeff3df9462b6b153a88941afeca9a3
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_eaeff3df9462b6b153a88941afeca9a3
>Result:
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7bb12d9946ceefb336013d020f571cf6
>Result:
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7d65ac61f84807c36a3615033d19a5d6
>Result:
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_3a4297677881963e3f80124971d50eea/tp_eaeff3df9462b6b153a88941afeca9a3 --virtualsize 1048576K --name brick_eaeff3df9462b6b153a88941afeca9a3
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_eaeff3df9462b6b153a88941afeca9a3" created.
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_7bb12d9946ceefb336013d020f571cf6 --virtualsize 1048576K --name brick_7bb12d9946ceefb336013d020f571cf6
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_7bb12d9946ceefb336013d020f571cf6" created.
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_7d65ac61f84807c36a3615033d19a5d6 --virtualsize 1048576K --name brick_7d65ac61f84807c36a3615033d19a5d6
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_7d65ac61f84807c36a3615033d19a5d6" created.
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_eaeff3df9462b6b153a88941afeca9a3
>Result: meta-data=/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_eaeff3df9462b6b153a88941afeca9a3 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7bb12d9946ceefb336013d020f571cf6
>Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7bb12d9946ceefb336013d020f571cf6 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7d65ac61f84807c36a3615033d19a5d6
>Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7d65ac61f84807c36a3615033d19a5d6 isize=512 agcount=8, agsize=32768 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=262144, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=2560, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_eaeff3df9462b6b153a88941afeca9a3 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_eaeff3df9462b6b153a88941afeca9a3 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7bb12d9946ceefb336013d020f571cf6 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7bb12d9946ceefb336013d020f571cf6 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 09:46:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7d65ac61f84807c36a3615033d19a5d6 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7d65ac61f84807c36a3615033d19a5d6 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/06/08 09:46:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_eaeff3df9462b6b153a88941afeca9a3 /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_eaeff3df9462b6b153a88941afeca9a3
>Result:
>[kubeexec] DEBUG 2018/06/08 09:46:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7bb12d9946ceefb336013d020f571cf6 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7bb12d9946ceefb336013d020f571cf6
>Result:
>[negroni] Started GET /queue/83fd9f1d6abffe2a8c2d47a2a9dc9349
>[negroni] Completed 200 OK in 118.831µs
>[kubeexec] DEBUG 2018/06/08 09:46:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7d65ac61f84807c36a3615033d19a5d6 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7d65ac61f84807c36a3615033d19a5d6
>Result:
>[kubeexec] DEBUG 2018/06/08 09:46:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_eaeff3df9462b6b153a88941afeca9a3/brick
>Result:
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 6.822273ms
>[negroni] Started GET /volumes/07ef5105131fa51c35a9007ee213ea7a
>[negroni] Completed 200 OK in 1.861715ms
>[negroni] Started GET /volumes/15e0122e942fc41f80666a3714670682
>[negroni] Completed 200 OK in 1.96889ms
>[negroni] Started GET /volumes/1918777ef3ce84df17c8a114fb89f33e
>[negroni] Completed 200 OK in 2.149022ms
>[negroni] Started GET /volumes/1a828ff2310b778d09ffdadd755dc5ee
>[negroni] Completed 200 OK in 2.005926ms
>[negroni] Started GET /volumes/1ef58be42cf8ea7cf9298cff303e903a
>[kubeexec] DEBUG 2018/06/08 09:46:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7bb12d9946ceefb336013d020f571cf6/brick
>Result:
>[negroni] Completed 200 OK in 2.626192ms
>[negroni] Started GET /volumes/21481d8911fe8ec238d97d71c1aa5cb3
>[negroni] Completed 200 OK in 555.675µs
>[negroni] Started GET /volumes/226838416791f3286fcacb7e5f1ff59d
>[negroni] Completed 200 OK in 1.434678ms
>[negroni] Started GET /volumes/2bf097c60bd8b38bfcb4327727ca5681
>[kubeexec] DEBUG 2018/06/08 09:46:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7d65ac61f84807c36a3615033d19a5d6/brick
>Result:
>[cmdexec] INFO 2018/06/08 09:46:19 Creating volume vol_0ea9a5540c7e0eac6dee700f278e79ac replica 3
>[negroni] Completed 200 OK in 2.454009ms
>[negroni] Started GET /volumes/337bf2c01bf8c45eec5bab53ad5c2e46
>[negroni] Completed 200 OK in 1.229778ms
>[negroni] Started GET /volumes/43194b98d83e8b61b376ccc54f79333a
>[negroni] Completed 200 OK in 535.703µs
>[negroni] Started GET /volumes/5d0dfb0ebb846fcd225c890ec9cdb885
>[negroni] Completed 200 OK in 1.265394ms
>[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
>[negroni] Completed 200 OK in 1.263281ms
>[negroni] Started GET /volumes/5e468916a8596cd844231548ccb61fc4
>[negroni] Completed 200 OK in 1.547351ms
>[negroni] Started GET /volumes/7d5a429f821efc8e8fe3f29569732b86
>[negroni] Completed 200 OK in 3.102714ms
>[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
>[negroni] Completed 200 OK in 1.281991ms
>[negroni] Started GET /volumes/8757685f765bbd74556cbf75086c88f6
>[negroni] Completed 200 OK in 501.166µs
>[negroni] Started GET /volumes/89ebb1e7eed2ff557488996a1657e75e
>[negroni] Completed 200 OK in 1.056443ms
>[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd
>[negroni] Completed 200 OK in 982.542µs
>[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d
>[negroni] Completed 200 OK in 959.923µs
>[negroni] Started GET /volumes/9f9fb17746d0aad637b132875d2744e5
>[negroni] Completed 200 OK in 531.702µs
>[negroni] Started GET /volumes/9fb2830da79dd70d910dad8426dc236f
>[negroni] Completed 200 OK in 601.335µs
>[negroni] Started GET /volumes/a6be87754541710c38b420381c76fb8c
>[negroni] Completed 200 OK in 1.532824ms
>[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185
>[negroni] Completed 200 OK in 497.624µs
>[negroni] Started GET /volumes/a8678c97e2708cf6e00aea160a4d46a0
>[negroni] Completed 200 OK in 829.415µs
>[negroni] Started GET /volumes/ad1e5849e9566f1bcaa09cfb9c0b96ef
>[negroni] Completed 200 OK in 504.136µs
>[negroni] Started GET /volumes/aee5088d2304cb95535752ef85f9f392
>[negroni] Completed 200 OK in 1.008875ms
>[negroni] Started GET /volumes/afef5f4a84abec3ab3e4f2a5bae2db23
>[negroni] Completed 200 OK in 829.617µs
>[negroni] Started GET /volumes/b7fb8f19c77039982909b8868f7af4cc
>[negroni] Completed 200 OK in 862.123µs
>[negroni] Started GET /volumes/b8f78f128f20b61ef032ce9ee5b6481c
>[negroni] Completed 200 OK in 804.264µs
>[negroni] Started GET /volumes/b99532640a5201d243193159ee762ae4
>[negroni] Completed 200 OK in 506.644µs
>[negroni] Started GET /volumes/bf31af76ef6c54e7e8f24f4d8711cb22
>[negroni] Completed 200 OK in 469.703µs
>[negroni] Started GET /volumes/c1dce5388e8c136a89e4a25e4cc97821
>[negroni] Completed 200 OK in 513.717µs
>[negroni] Started GET /volumes/cc8a686464bb4017c91ba7294ee1b091
>[negroni] Completed 200 OK in 558.682µs
>[negroni] Started GET /volumes/d21ed39ae095af2674175a798a0cb02c
>[negroni] Completed 200 OK in 500.178µs
>[negroni] Started GET /volumes/daf13b7f607d1b280c78e909af25a215
>[negroni] Completed 200 OK in 1.837017ms
>[negroni] Started GET /volumes/dc9ab13a25ccbad8262fba92766a31f9
>[negroni] Completed 200 OK in 525.563µs
>[negroni] Started GET /volumes/dcaf25d0becadd0bb3c732c2c2ca27da
>[negroni] Completed 200 OK in 507.402µs
>[negroni] Started GET /volumes/e25438438fd2d50a0b07f26b4bfb338a
>[negroni] Completed 200 OK in 814.735µs
>[negroni] Started GET /volumes/f73270331b95278f490fd1dfe0b010df
>[negroni] Completed 200 OK in 529.836µs
>[kubeexec] DEBUG 2018/06/08 09:46:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume create vol_0ea9a5540c7e0eac6dee700f278e79ac replica 3 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7d65ac61f84807c36a3615033d19a5d6/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7bb12d9946ceefb336013d020f571cf6/brick 10.70.46.122:/var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_eaeff3df9462b6b153a88941afeca9a3/brick
>Result: volume create: vol_0ea9a5540c7e0eac6dee700f278e79ac: success: please start the volume to access data
>[negroni] Started GET /queue/83fd9f1d6abffe2a8c2d47a2a9dc9349
>[negroni] Completed 200 OK in 133.461µs
>[negroni] Started GET /queue/83fd9f1d6abffe2a8c2d47a2a9dc9349
>[negroni] Completed 200 OK in 141.199µs
>[kubeexec] DEBUG 2018/06/08 09:46:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume start vol_0ea9a5540c7e0eac6dee700f278e79ac
>Result: volume start: vol_0ea9a5540c7e0eac6dee700f278e79ac: success
>[heketi] INFO 2018/06/08 09:46:21 Create Volume succeeded
>[asynchttp] INFO 2018/06/08 09:46:21 asynchttp.go:292: Completed job 83fd9f1d6abffe2a8c2d47a2a9dc9349 in 3.691364677s
>[negroni] Started GET /queue/83fd9f1d6abffe2a8c2d47a2a9dc9349
>[negroni] Completed 303 See Other in 181.019µs
>[negroni] Started GET /volumes/0ea9a5540c7e0eac6dee700f278e79ac
>[negroni] Completed 200 OK in 3.897644ms
>[negroni] Started DELETE /volumes/0ea9a5540c7e0eac6dee700f278e79ac >[negroni] Completed 202 Accepted in 18.312145ms >[asynchttp] INFO 2018/06/08 09:46:22 asynchttp.go:288: Started job ea8fa0a98a4c9c258d7b8678033d370a >[heketi] INFO 2018/06/08 09:46:22 Started async operation: Delete Volume >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 116.794µs >[kubeexec] DEBUG 2018/06/08 09:46:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script snapshot list vol_0ea9a5540c7e0eac6dee700f278e79ac --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 219.549µs >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 162.721µs >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 144.128µs >[kubeexec] DEBUG 2018/06/08 09:46:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume stop vol_0ea9a5540c7e0eac6dee700f278e79ac force >Result: volume stop: vol_0ea9a5540c7e0eac6dee700f278e79ac: success >[kubeexec] DEBUG 2018/06/08 09:46:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: gluster --mode=script volume delete vol_0ea9a5540c7e0eac6dee700f278e79ac >Result: volume delete: vol_0ea9a5540c7e0eac6dee700f278e79ac: success >[heketi] INFO 2018/06/08 09:46:25 Deleting brick 7d65ac61f84807c36a3615033d19a5d6 >[heketi] INFO 2018/06/08 09:46:25 Deleting brick 7bb12d9946ceefb336013d020f571cf6 >[heketi] INFO 
2018/06/08 09:46:25 Deleting brick eaeff3df9462b6b153a88941afeca9a3 >[kubeexec] DEBUG 2018/06/08 09:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7d65ac61f84807c36a3615033d19a5d6 | cut -d" " -f1 >Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7d65ac61f84807c36a3615033d19a5d6 >[kubeexec] DEBUG 2018/06/08 09:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_eaeff3df9462b6b153a88941afeca9a3 | cut -d" " -f1 >Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_eaeff3df9462b6b153a88941afeca9a3 >[kubeexec] DEBUG 2018/06/08 09:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7bb12d9946ceefb336013d020f571cf6 | cut -d" " -f1 >Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7bb12d9946ceefb336013d020f571cf6 >[kubeexec] DEBUG 2018/06/08 09:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_eaeff3df9462b6b153a88941afeca9a3 > >Result: vg_3a4297677881963e3f80124971d50eea/tp_eaeff3df9462b6b153a88941afeca9a3 >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 274.928µs >[kubeexec] DEBUG 2018/06/08 09:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings 
--separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7bb12d9946ceefb336013d020f571cf6 > >Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_7bb12d9946ceefb336013d020f571cf6 >[kubeexec] DEBUG 2018/06/08 09:46:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7d65ac61f84807c36a3615033d19a5d6 > >Result: vg_96f1667f2f1ced2c5ef94772922be93b/tp_7d65ac61f84807c36a3615033d19a5d6 >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 250.708µs >[kubeexec] DEBUG 2018/06/08 09:46:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_eaeff3df9462b6b153a88941afeca9a3 >Result: >[kubeexec] DEBUG 2018/06/08 09:46:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7bb12d9946ceefb336013d020f571cf6 >Result: >[kubeexec] DEBUG 2018/06/08 09:46:27 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7d65ac61f84807c36a3615033d19a5d6 >Result: >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 157.967µs >[kubeexec] DEBUG 2018/06/08 09:46:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_eaeff3df9462b6b153a88941afeca9a3/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 
09:46:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_7bb12d9946ceefb336013d020f571cf6/d" /var/lib/heketi/fstab >Result: >[kubeexec] DEBUG 2018/06/08 09:46:28 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_7d65ac61f84807c36a3615033d19a5d6/d" /var/lib/heketi/fstab >Result: >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 224.055µs >[kubeexec] DEBUG 2018/06/08 09:46:29 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_eaeff3df9462b6b153a88941afeca9a3 > >Result: Logical volume "brick_eaeff3df9462b6b153a88941afeca9a3" successfully removed >[kubeexec] DEBUG 2018/06/08 09:46:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_7bb12d9946ceefb336013d020f571cf6 > >Result: Logical volume "brick_7bb12d9946ceefb336013d020f571cf6" successfully removed >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 130.391µs >[kubeexec] DEBUG 2018/06/08 09:46:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_7d65ac61f84807c36a3615033d19a5d6 > >Result: Logical volume "brick_7d65ac61f84807c36a3615033d19a5d6" successfully removed >[kubeexec] DEBUG 2018/06/08 09:46:30 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: 
dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_eaeff3df9462b6b153a88941afeca9a3 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 09:46:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_7bb12d9946ceefb336013d020f571cf6 > >Result: 0 >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 132.589µs >[kubeexec] DEBUG 2018/06/08 09:46:31 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_7d65ac61f84807c36a3615033d19a5d6 > >Result: 0 >[kubeexec] DEBUG 2018/06/08 09:46:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_eaeff3df9462b6b153a88941afeca9a3 > >Result: Logical volume "tp_eaeff3df9462b6b153a88941afeca9a3" successfully removed >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 154.623µs >[kubeexec] DEBUG 2018/06/08 09:46:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_7bb12d9946ceefb336013d020f571cf6 > >Result: Logical volume "tp_7bb12d9946ceefb336013d020f571cf6" successfully removed >[kubeexec] DEBUG 2018/06/08 09:46:32 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f 
vg_96f1667f2f1ced2c5ef94772922be93b/tp_7d65ac61f84807c36a3615033d19a5d6 > >Result: Logical volume "tp_7d65ac61f84807c36a3615033d19a5d6" successfully removed >[kubeexec] DEBUG 2018/06/08 09:46:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_eaeff3df9462b6b153a88941afeca9a3 >Result: >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 200 OK in 139.089µs >[kubeexec] DEBUG 2018/06/08 09:46:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_7bb12d9946ceefb336013d020f571cf6 >Result: >[kubeexec] DEBUG 2018/06/08 09:46:33 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_7d65ac61f84807c36a3615033d19a5d6 >Result: >[heketi] INFO 2018/06/08 09:46:33 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 09:46:33 asynchttp.go:292: Completed job ea8fa0a98a4c9c258d7b8678033d370a in 11.52867039s >[negroni] Started GET /queue/ea8fa0a98a4c9c258d7b8678033d370a >[negroni] Completed 204 No Content in 235.419µs >[negroni] Started POST /devices/f6e4e4c13df4918c0c15e24e24828609/state >[negroni] Completed 404 Not Found in 2.2765ms >[negroni] Started DELETE /volumes/5e468916a8596cd844231548ccb61fc4 >[negroni] Completed 202 Accepted in 12.763485ms >[asynchttp] INFO 2018/06/08 09:46:34 asynchttp.go:288: Started job 9a32fb2da0f6f19e8e275255e90e3ef6 >[heketi] INFO 2018/06/08 09:46:34 Started async operation: Delete Volume >[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6 >[negroni] Completed 200 OK in 166.004µs >[kubeexec] DEBUG 2018/06/08 09:46:34 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5e468916a8596cd844231548ccb61fc4 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6 >[negroni] Completed 200 OK in 221.052µs >[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6 >[negroni] Completed 200 OK in 203.811µs >[kubeexec] DEBUG 2018/06/08 09:46:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume stop vol_5e468916a8596cd844231548ccb61fc4 force >Result: volume stop: vol_5e468916a8596cd844231548ccb61fc4: success >[kubeexec] DEBUG 2018/06/08 09:46:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume delete vol_5e468916a8596cd844231548ccb61fc4 >Result: volume delete: vol_5e468916a8596cd844231548ccb61fc4: success >[heketi] INFO 2018/06/08 09:46:37 Deleting brick 8f286a6ba640556678264621e234d19d >[heketi] INFO 2018/06/08 09:46:37 Deleting brick af30399c48acd8c5470723926d83d4c5 >[heketi] INFO 2018/06/08 09:46:37 Deleting brick c6c4ef98245b2956a8e3481fb27eb337 >[kubeexec] DEBUG 2018/06/08 09:46:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c6c4ef98245b2956a8e3481fb27eb337 | cut -d" " -f1 >Result: /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c6c4ef98245b2956a8e3481fb27eb337 >[kubeexec] DEBUG 2018/06/08 09:46:37 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_af30399c48acd8c5470723926d83d4c5 | cut -d" " -f1
Result: /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_af30399c48acd8c5470723926d83d4c5
[kubeexec] DEBUG 2018/06/08 09:46:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8f286a6ba640556678264621e234d19d | cut -d" " -f1
Result: /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8f286a6ba640556678264621e234d19d
[kubeexec] DEBUG 2018/06/08 09:46:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c6c4ef98245b2956a8e3481fb27eb337
Result: vg_9394bc70699b006c5460c9f654cf345f/tp_c6c4ef98245b2956a8e3481fb27eb337
[kubeexec] DEBUG 2018/06/08 09:46:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_af30399c48acd8c5470723926d83d4c5
Result: vg_d389f0278a774bd7443a09af960961d8/tp_af30399c48acd8c5470723926d83d4c5
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 200 OK in 125.865µs
[kubeexec] DEBUG 2018/06/08 09:46:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv
/dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8f286a6ba640556678264621e234d19d
Result: vg_3a4297677881963e3f80124971d50eea/tp_8f286a6ba640556678264621e234d19d
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 200 OK in 246.774µs
[kubeexec] DEBUG 2018/06/08 09:46:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c6c4ef98245b2956a8e3481fb27eb337
Result:
[kubeexec] DEBUG 2018/06/08 09:46:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_af30399c48acd8c5470723926d83d4c5
Result:
[kubeexec] DEBUG 2018/06/08 09:46:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8f286a6ba640556678264621e234d19d
Result:
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 200 OK in 233.172µs
[kubeexec] DEBUG 2018/06/08 09:46:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_c6c4ef98245b2956a8e3481fb27eb337/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2018/06/08 09:46:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_af30399c48acd8c5470723926d83d4c5/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2018/06/08 09:46:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save
"/brick_8f286a6ba640556678264621e234d19d/d" /var/lib/heketi/fstab
Result:
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 200 OK in 205.618µs
[kubeexec] DEBUG 2018/06/08 09:46:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_9394bc70699b006c5460c9f654cf345f-brick_c6c4ef98245b2956a8e3481fb27eb337
Result: Logical volume "brick_c6c4ef98245b2956a8e3481fb27eb337" successfully removed
[kubeexec] DEBUG 2018/06/08 09:46:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_d389f0278a774bd7443a09af960961d8-brick_af30399c48acd8c5470723926d83d4c5
Result: Logical volume "brick_af30399c48acd8c5470723926d83d4c5" successfully removed
[kubeexec] DEBUG 2018/06/08 09:46:41 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f /dev/mapper/vg_3a4297677881963e3f80124971d50eea-brick_8f286a6ba640556678264621e234d19d
Result: Logical volume "brick_8f286a6ba640556678264621e234d19d" successfully removed
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 200 OK in 123.932µs
[kubeexec] DEBUG 2018/06/08 09:46:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_9394bc70699b006c5460c9f654cf345f/tp_c6c4ef98245b2956a8e3481fb27eb337
Result: 0
[kubeexec] DEBUG 2018/06/08 09:46:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count
vg_d389f0278a774bd7443a09af960961d8/tp_af30399c48acd8c5470723926d83d4c5
Result: 0
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 200 OK in 161.236µs
[kubeexec] DEBUG 2018/06/08 09:46:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_3a4297677881963e3f80124971d50eea/tp_8f286a6ba640556678264621e234d19d
Result: 0
[kubeexec] DEBUG 2018/06/08 09:46:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_9394bc70699b006c5460c9f654cf345f/tp_c6c4ef98245b2956a8e3481fb27eb337
Result: Logical volume "tp_c6c4ef98245b2956a8e3481fb27eb337" successfully removed
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 200 OK in 175.817µs
[kubeexec] DEBUG 2018/06/08 09:46:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_d389f0278a774bd7443a09af960961d8/tp_af30399c48acd8c5470723926d83d4c5
Result: Logical volume "tp_af30399c48acd8c5470723926d83d4c5" successfully removed
[kubeexec] DEBUG 2018/06/08 09:46:43 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_3a4297677881963e3f80124971d50eea/tp_8f286a6ba640556678264621e234d19d
Result: Logical volume "tp_8f286a6ba640556678264621e234d19d" successfully removed
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 200 OK in 129.444µs
[kubeexec] DEBUG 2018/06/08 09:46:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir
/var/lib/heketi/mounts/vg_9394bc70699b006c5460c9f654cf345f/brick_c6c4ef98245b2956a8e3481fb27eb337
Result:
[kubeexec] DEBUG 2018/06/08 09:46:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_af30399c48acd8c5470723926d83d4c5
Result:
[kubeexec] DEBUG 2018/06/08 09:46:44 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_3a4297677881963e3f80124971d50eea/brick_8f286a6ba640556678264621e234d19d
Result:
[heketi] INFO 2018/06/08 09:46:45 Delete Volume succeeded
[asynchttp] INFO 2018/06/08 09:46:45 asynchttp.go:292: Completed job 9a32fb2da0f6f19e8e275255e90e3ef6 in 10.455956383s
[negroni] Started GET /queue/9a32fb2da0f6f19e8e275255e90e3ef6
[negroni] Completed 204 No Content in 176.92µs
[negroni] Started POST /volumes
[heketi] INFO 2018/06/08 09:46:45 Allocating brick set #0
[negroni] Completed 202 Accepted in 38.490411ms
[asynchttp] INFO 2018/06/08 09:46:45 asynchttp.go:288: Started job d64e6cee950d6dc6d84ef4dcbe1e8983
[heketi] INFO 2018/06/08 09:46:45 Started async operation: Create Volume
[heketi] INFO 2018/06/08 09:46:45 Creating brick ef4fdbd02d533faf4524683816644ca7
[heketi] INFO 2018/06/08 09:46:45 Creating brick 626a1c3abdca4c84f7933afe0b14ebd0
[heketi] INFO 2018/06/08 09:46:45 Creating brick 33dec549534fa9947339ff13b2800c47
[negroni] Started GET /queue/d64e6cee950d6dc6d84ef4dcbe1e8983
[negroni] Completed 200 OK in 367.055µs
[kubeexec] DEBUG 2018/06/08 09:46:45 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir -p /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_626a1c3abdca4c84f7933afe0b14ebd0
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir -p /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_33dec549534fa9947339ff13b2800c47
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir -p /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ef4fdbd02d533faf4524683816644ca7
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_626a1c3abdca4c84f7933afe0b14ebd0 --virtualsize 1048576K --name brick_626a1c3abdca4c84f7933afe0b14ebd0
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
 Logical volume "brick_626a1c3abdca4c84f7933afe0b14ebd0" created.
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_96f1667f2f1ced2c5ef94772922be93b/tp_33dec549534fa9947339ff13b2800c47 --virtualsize 1048576K --name brick_33dec549534fa9947339ff13b2800c47
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
 Logical volume "brick_33dec549534fa9947339ff13b2800c47" created.
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_626a1c3abdca4c84f7933afe0b14ebd0
Result: meta-data=/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_626a1c3abdca4c84f7933afe0b14ebd0 isize=512 agcount=8, agsize=32768 blks
 = sectsz=512 attr=2, projid32bit=1
 = crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=262144, imaxpct=25
 = sunit=64 swidth=64 blks
naming =version 2 bsize=8192 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
 = sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_33dec549534fa9947339ff13b2800c47
Result: meta-data=/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_33dec549534fa9947339ff13b2800c47 isize=512 agcount=8, agsize=32768 blks
 = sectsz=512 attr=2, projid32bit=1
 = crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=262144, imaxpct=25
 = sunit=64 swidth=64 blks
naming =version 2 bsize=8192 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
 = sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: awk "BEGIN {print \"/dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_626a1c3abdca4c84f7933afe0b14ebd0 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_626a1c3abdca4c84f7933afe0b14ebd0 xfs rw,inode64,noatime,nouuid 1 2\" >>
\"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_ef4fdbd02d533faf4524683816644ca7 --virtualsize 1048576K --name brick_ef4fdbd02d533faf4524683816644ca7
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
 Logical volume "brick_ef4fdbd02d533faf4524683816644ca7" created.
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: awk "BEGIN {print \"/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_33dec549534fa9947339ff13b2800c47 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_33dec549534fa9947339ff13b2800c47 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_626a1c3abdca4c84f7933afe0b14ebd0 /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_626a1c3abdca4c84f7933afe0b14ebd0
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mkdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_626a1c3abdca4c84f7933afe0b14ebd0/brick
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount -o rw,inode64,noatime,nouuid
/dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_33dec549534fa9947339ff13b2800c47 /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_33dec549534fa9947339ff13b2800c47
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ef4fdbd02d533faf4524683816644ca7
Result: meta-data=/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ef4fdbd02d533faf4524683816644ca7 isize=512 agcount=8, agsize=32768 blks
 = sectsz=512 attr=2, projid32bit=1
 = crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=262144, imaxpct=25
 = sunit=64 swidth=64 blks
naming =version 2 bsize=8192 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
 = sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[negroni] Started GET /queue/d64e6cee950d6dc6d84ef4dcbe1e8983
[negroni] Completed 200 OK in 180.098µs
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mkdir /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_33dec549534fa9947339ff13b2800c47/brick
Result:
[kubeexec] DEBUG 2018/06/08 09:46:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: awk "BEGIN {print \"/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ef4fdbd02d533faf4524683816644ca7 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ef4fdbd02d533faf4524683816644ca7 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/06/08 09:46:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod:
glusterfs-storage-pg4xc Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ef4fdbd02d533faf4524683816644ca7 /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ef4fdbd02d533faf4524683816644ca7
Result:
[kubeexec] DEBUG 2018/06/08 09:46:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mkdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ef4fdbd02d533faf4524683816644ca7/brick
Result:
[cmdexec] INFO 2018/06/08 09:46:47 Creating volume vol_643274fea1be16f2f01b0800d6211145 replica 3
[kubeexec] DEBUG 2018/06/08 09:46:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume create vol_643274fea1be16f2f01b0800d6211145 replica 3 10.70.46.122:/var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ef4fdbd02d533faf4524683816644ca7/brick 10.70.47.76:/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_33dec549534fa9947339ff13b2800c47/brick 10.70.46.187:/var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_626a1c3abdca4c84f7933afe0b14ebd0/brick
Result: volume create: vol_643274fea1be16f2f01b0800d6211145: success: please start the volume to access data
[negroni] Started GET /queue/d64e6cee950d6dc6d84ef4dcbe1e8983
[negroni] Completed 200 OK in 184.697µs
[negroni] Started GET /queue/d64e6cee950d6dc6d84ef4dcbe1e8983
[negroni] Completed 200 OK in 159.351µs
[kubeexec] DEBUG 2018/06/08 09:46:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script volume start vol_643274fea1be16f2f01b0800d6211145
Result: volume start: vol_643274fea1be16f2f01b0800d6211145: success
[heketi] INFO 2018/06/08 09:46:49 Create Volume succeeded
[asynchttp] INFO
2018/06/08 09:46:49 asynchttp.go:292: Completed job d64e6cee950d6dc6d84ef4dcbe1e8983 in 3.896902935s
[negroni] Started GET /queue/d64e6cee950d6dc6d84ef4dcbe1e8983
[negroni] Completed 303 See Other in 159.686µs
[negroni] Started GET /volumes/643274fea1be16f2f01b0800d6211145
[negroni] Completed 200 OK in 4.474424ms
[negroni] Started DELETE /volumes/643274fea1be16f2f01b0800d6211145
[negroni] Completed 202 Accepted in 9.284556ms
[asynchttp] INFO 2018/06/08 09:46:49 asynchttp.go:288: Started job 09cb5cee5e35b446c818941864fc4c7e
[heketi] INFO 2018/06/08 09:46:49 Started async operation: Delete Volume
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 109.53µs
[kubeexec] DEBUG 2018/06/08 09:46:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_643274fea1be16f2f01b0800d6211145 --xml
Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <snapList>
    <count>0</count>
  </snapList>
</cliOutput>
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 136.973µs
[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
[negroni] Completed 202 Accepted in 12.15055ms
[asynchttp] INFO 2018/06/08 09:46:50 asynchttp.go:288: Started job df498c12a6d45ee13eb90ea777810002
[heketi] INFO 2018/06/08 09:46:50 Started async operation: Delete Volume
[negroni] Started GET /queue/df498c12a6d45ee13eb90ea777810002
[negroni] Completed 200 OK in 150.618µs
[kubeexec] DEBUG 2018/06/08 09:46:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml
Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>-1</opRet>
  <opErrno>30806</opErrno>
  <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr>
</cliOutput>
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 194.17µs
[negroni] Started GET /queue/df498c12a6d45ee13eb90ea777810002
[negroni] Completed 200 OK in 126.283µs
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 154.328µs
[negroni] Started GET /queue/df498c12a6d45ee13eb90ea777810002
[negroni] Completed 200 OK in 133.027µs
[kubeexec] ERROR 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
]
[cmdexec] ERROR 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
[kubeexec] DEBUG 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume stop vol_643274fea1be16f2f01b0800d6211145 force
Result: volume stop: vol_643274fea1be16f2f01b0800d6211145: success
[kubeexec] ERROR 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed:
Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
]
[cmdexec] ERROR 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
[heketi] ERROR 2018/06/08 09:46:53 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
[heketi] ERROR 2018/06/08 09:46:53 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
[heketi] ERROR 2018/06/08 09:46:53 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist
[asynchttp] INFO 2018/06/08 09:46:53 asynchttp.go:292: Completed job df498c12a6d45ee13eb90ea777810002 in 2.724129188s
[kubeexec] DEBUG 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script volume delete vol_643274fea1be16f2f01b0800d6211145
Result: volume delete: vol_643274fea1be16f2f01b0800d6211145: success
[heketi] INFO 2018/06/08 09:46:53 Deleting brick
33dec549534fa9947339ff13b2800c47
[heketi] INFO 2018/06/08 09:46:53 Deleting brick ef4fdbd02d533faf4524683816644ca7
[heketi] INFO 2018/06/08 09:46:53 Deleting brick 626a1c3abdca4c84f7933afe0b14ebd0
[kubeexec] DEBUG 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: mount | grep -w /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_33dec549534fa9947339ff13b2800c47 | cut -d" " -f1
Result: /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_33dec549534fa9947339ff13b2800c47
[kubeexec] DEBUG 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: mount | grep -w /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ef4fdbd02d533faf4524683816644ca7 | cut -d" " -f1
Result: /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ef4fdbd02d533faf4524683816644ca7
[kubeexec] DEBUG 2018/06/08 09:46:53 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: mount | grep -w /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_626a1c3abdca4c84f7933afe0b14ebd0 | cut -d" " -f1
Result: /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_626a1c3abdca4c84f7933afe0b14ebd0
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 155.967µs
[negroni] Started GET /queue/df498c12a6d45ee13eb90ea777810002
[negroni] Completed 500 Internal Server Error in 119.555µs
[kubeexec] DEBUG 2018/06/08 09:46:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_33dec549534fa9947339ff13b2800c47
Result:
vg_96f1667f2f1ced2c5ef94772922be93b/tp_33dec549534fa9947339ff13b2800c47
[kubeexec] DEBUG 2018/06/08 09:46:54 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ef4fdbd02d533faf4524683816644ca7
Result: vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_ef4fdbd02d533faf4524683816644ca7
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 290.059µs
[kubeexec] DEBUG 2018/06/08 09:46:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --separator=/ -ovg_name,pool_lv /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_626a1c3abdca4c84f7933afe0b14ebd0
Result: vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_626a1c3abdca4c84f7933afe0b14ebd0
[kubeexec] DEBUG 2018/06/08 09:46:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: umount /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_33dec549534fa9947339ff13b2800c47
Result:
[kubeexec] DEBUG 2018/06/08 09:46:55 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: umount /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ef4fdbd02d533faf4524683816644ca7
Result:
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 164.036µs
[kubeexec] DEBUG 2018/06/08 09:46:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: umount /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_626a1c3abdca4c84f7933afe0b14ebd0
Result:
[kubeexec] DEBUG 2018/06/08
09:46:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: sed -i.save "/brick_33dec549534fa9947339ff13b2800c47/d" /var/lib/heketi/fstab
Result:
[kubeexec] DEBUG 2018/06/08 09:46:56 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: sed -i.save "/brick_ef4fdbd02d533faf4524683816644ca7/d" /var/lib/heketi/fstab
Result:
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 256.325µs
[kubeexec] DEBUG 2018/06/08 09:46:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: sed -i.save "/brick_626a1c3abdca4c84f7933afe0b14ebd0/d" /var/lib/heketi/fstab
Result:
[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d
[negroni] Completed 202 Accepted in 19.085366ms
[asynchttp] INFO 2018/06/08 09:46:57 asynchttp.go:288: Started job 8bdb0827c4209c0fd92fd7cbc9a69130
[heketi] INFO 2018/06/08 09:46:57 Started async operation: Delete Volume
[negroni] Started GET /queue/8bdb0827c4209c0fd92fd7cbc9a69130
[negroni] Completed 200 OK in 103.112µs
[kubeexec] DEBUG 2018/06/08 09:46:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f /dev/mapper/vg_96f1667f2f1ced2c5ef94772922be93b-brick_33dec549534fa9947339ff13b2800c47
Result: Logical volume "brick_33dec549534fa9947339ff13b2800c47" successfully removed
[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e
[negroni] Completed 200 OK in 140.473µs
[kubeexec] DEBUG 2018/06/08 09:46:57 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f
/dev/mapper/vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_ef4fdbd02d533faf4524683816644ca7
Result: Logical volume "brick_ef4fdbd02d533faf4524683816644ca7" successfully removed
[negroni] Started GET /volumes
[negroni] Completed 200 OK in 7.56993ms
[negroni] Started GET /volumes/07ef5105131fa51c35a9007ee213ea7a
[negroni] Completed 200 OK in 1.898154ms
[negroni] Started GET /volumes/15e0122e942fc41f80666a3714670682
[negroni] Completed 200 OK in 1.533247ms
[negroni] Started GET /volumes/1918777ef3ce84df17c8a114fb89f33e
[negroni] Completed 200 OK in 1.338812ms
[negroni] Started GET /volumes/1a828ff2310b778d09ffdadd755dc5ee
[negroni] Completed 200 OK in 1.250222ms
[negroni] Started GET /volumes/1ef58be42cf8ea7cf9298cff303e903a
[negroni] Completed 200 OK in 1.571701ms
[negroni] Started GET /volumes/21481d8911fe8ec238d97d71c1aa5cb3
[negroni] Completed 200 OK in 823.728µs
[negroni] Started GET /volumes/226838416791f3286fcacb7e5f1ff59d
[negroni] Completed 200 OK in 1.256992ms
[negroni] Started GET /volumes/2bf097c60bd8b38bfcb4327727ca5681
[negroni] Completed 200 OK in 1.489248ms
[negroni] Started GET /volumes/337bf2c01bf8c45eec5bab53ad5c2e46
[negroni] Completed 200 OK in 1.467833ms
[negroni] Started GET /volumes/43194b98d83e8b61b376ccc54f79333a
[negroni] Completed 200 OK in 1.172019ms
[negroni] Started GET /volumes/5d0dfb0ebb846fcd225c890ec9cdb885
[negroni] Completed 200 OK in 1.225199ms
[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc
[negroni] Completed 200 OK in 1.484004ms
[negroni] Started GET /volumes/643274fea1be16f2f01b0800d6211145
[negroni] Completed 200 OK in 549.884µs
[negroni] Started GET /volumes/7d5a429f821efc8e8fe3f29569732b86
[negroni] Completed 200 OK in 1.207518ms
[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033
[negroni] Completed 200 OK in 857.996µs
[negroni] Started GET /volumes/8757685f765bbd74556cbf75086c88f6
[negroni] Completed 200 OK in 801.35µs
[negroni] Started GET
/volumes/89ebb1e7eed2ff557488996a1657e75e >[negroni] Completed 200 OK in 1.128875ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 774.983µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 549.207µs >[negroni] Started GET /volumes/9f9fb17746d0aad637b132875d2744e5 >[negroni] Completed 200 OK in 555.501µs >[negroni] Started GET /volumes/9fb2830da79dd70d910dad8426dc236f >[negroni] Completed 200 OK in 525.833µs >[negroni] Started GET /volumes/a6be87754541710c38b420381c76fb8c >[negroni] Completed 200 OK in 1.734357ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 638.664µs >[negroni] Started GET /volumes/a8678c97e2708cf6e00aea160a4d46a0 >[negroni] Completed 200 OK in 601.029µs >[negroni] Started GET /volumes/ad1e5849e9566f1bcaa09cfb9c0b96ef >[negroni] Completed 200 OK in 534.107µs >[negroni] Started GET /volumes/aee5088d2304cb95535752ef85f9f392 >[negroni] Completed 200 OK in 1.037091ms >[negroni] Started GET /volumes/afef5f4a84abec3ab3e4f2a5bae2db23 >[negroni] Completed 200 OK in 581.634µs >[negroni] Started GET /volumes/b7fb8f19c77039982909b8868f7af4cc >[negroni] Completed 200 OK in 509.715µs >[negroni] Started GET /volumes/b8f78f128f20b61ef032ce9ee5b6481c >[negroni] Completed 200 OK in 941.381µs >[negroni] Started GET /volumes/b99532640a5201d243193159ee762ae4 >[negroni] Completed 200 OK in 501.622µs >[negroni] Started GET /volumes/bf31af76ef6c54e7e8f24f4d8711cb22 >[negroni] Completed 200 OK in 511.191µs >[negroni] Started GET /volumes/c1dce5388e8c136a89e4a25e4cc97821 >[negroni] Completed 200 OK in 615.044µs >[negroni] Started GET /volumes/cc8a686464bb4017c91ba7294ee1b091 >[negroni] Completed 200 OK in 577.149µs >[negroni] Started GET /volumes/d21ed39ae095af2674175a798a0cb02c >[negroni] Completed 200 OK in 520.09µs >[negroni] Started GET /volumes/daf13b7f607d1b280c78e909af25a215 >[negroni] Completed 200 OK in 965.515µs 
>[negroni] Started GET /volumes/dc9ab13a25ccbad8262fba92766a31f9 >[negroni] Completed 200 OK in 545.744µs >[negroni] Started GET /volumes/dcaf25d0becadd0bb3c732c2c2ca27da >[negroni] Completed 200 OK in 500.812µs >[negroni] Started GET /volumes/e25438438fd2d50a0b07f26b4bfb338a >[negroni] Completed 200 OK in 480.268µs >[negroni] Started GET /volumes/f73270331b95278f490fd1dfe0b010df >[negroni] Completed 200 OK in 507.865µs >[kubeexec] DEBUG 2018/06/08 09:46:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f /dev/mapper/vg_b861af9cc67f5bf8d5a8f9e38f56b14e-brick_626a1c3abdca4c84f7933afe0b14ebd0 > >Result: Logical volume "brick_626a1c3abdca4c84f7933afe0b14ebd0" successfully removed >[negroni] Started GET /queue/8bdb0827c4209c0fd92fd7cbc9a69130 >[negroni] Completed 200 OK in 115.145µs >[kubeexec] DEBUG 2018/06/08 09:46:58 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e >[negroni] Completed 200 OK in 137.344µs >[kubeexec] DEBUG 2018/06/08 09:46:59 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvs --noheadings --options=thin_count vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_ef4fdbd02d533faf4524683816644ca7 > >Result: 0 >[negroni] Started GET /queue/8bdb0827c4209c0fd92fd7cbc9a69130 >[negroni] Completed 200 OK in 238.673µs >[kubeexec] DEBUG 2018/06/08 09:46:59 
/src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvs --noheadings --options=thin_count vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_626a1c3abdca4c84f7933afe0b14ebd0 > >Result: 0 >[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e >[negroni] Completed 200 OK in 279.714µs >[kubeexec] DEBUG 2018/06/08 09:47:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvs --noheadings --options=thin_count vg_96f1667f2f1ced2c5ef94772922be93b/tp_33dec549534fa9947339ff13b2800c47 > >Result: 0 >[negroni] Started GET /queue/8bdb0827c4209c0fd92fd7cbc9a69130 >[negroni] Completed 200 OK in 105.455µs >[kubeexec] DEBUG 2018/06/08 09:47:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: lvremove --autobackup=n -f vg_66a9af9f6bac95f9d8d556a2f14c29d3/tp_ef4fdbd02d533faf4524683816644ca7 > >Result: Logical volume "tp_ef4fdbd02d533faf4524683816644ca7" successfully removed >[kubeexec] DEBUG 2018/06/08 09:47:00 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: lvremove --autobackup=n -f vg_b861af9cc67f5bf8d5a8f9e38f56b14e/tp_626a1c3abdca4c84f7933afe0b14ebd0 > >Result: Logical volume "tp_626a1c3abdca4c84f7933afe0b14ebd0" successfully removed >[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e >[negroni] Completed 200 OK in 138.81µs >[kubeexec] ERROR 2018/06/08 09:47:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume 
vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:47:01 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/8bdb0827c4209c0fd92fd7cbc9a69130 >[negroni] Completed 200 OK in 101.665µs >[kubeexec] DEBUG 2018/06/08 09:47:01 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: rmdir /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_ef4fdbd02d533faf4524683816644ca7 >Result: >[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e >[negroni] Completed 200 OK in 144.173µs >[kubeexec] DEBUG 2018/06/08 09:47:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: rmdir /var/lib/heketi/mounts/vg_b861af9cc67f5bf8d5a8f9e38f56b14e/brick_626a1c3abdca4c84f7933afe0b14ebd0 >Result: >[kubeexec] DEBUG 2018/06/08 09:47:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: lvremove --autobackup=n -f vg_96f1667f2f1ced2c5ef94772922be93b/tp_33dec549534fa9947339ff13b2800c47 > >Result: Logical volume "tp_33dec549534fa9947339ff13b2800c47" successfully removed >[negroni] Started GET /queue/8bdb0827c4209c0fd92fd7cbc9a69130 >[negroni] Completed 200 OK in 138.646µs >[kubeexec] ERROR 2018/06/08 09:47:02 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: 
failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:47:02 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:47:02 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:47:02 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:47:02 asynchttp.go:292: Completed job 8bdb0827c4209c0fd92fd7cbc9a69130 in 5.21965916s >[heketi] ERROR 2018/06/08 09:47:02 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e >[negroni] Completed 200 OK in 196.283µs >[kubeexec] DEBUG 2018/06/08 09:47:03 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: rmdir 
/var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_33dec549534fa9947339ff13b2800c47 >Result: >[heketi] INFO 2018/06/08 09:47:03 Delete Volume succeeded >[asynchttp] INFO 2018/06/08 09:47:03 asynchttp.go:292: Completed job 09cb5cee5e35b446c818941864fc4c7e in 13.09145499s >[negroni] Started GET /queue/8bdb0827c4209c0fd92fd7cbc9a69130 >[negroni] Completed 500 Internal Server Error in 166.88µs >[negroni] Started GET /queue/09cb5cee5e35b446c818941864fc4c7e >[negroni] Completed 204 No Content in 173.293µs >[heketi] INFO 2018/06/08 09:47:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:47:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:47:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 23min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S 
/var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─22979 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:47:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:47:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:47:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 23min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─24728 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:47:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:47:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:47:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 21min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─22654 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:47:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:47:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/vol_a8678c97e2708cf6e00aea160a4d46a0 >[heketi] WARNING 2018/06/08 09:47:36 Invalid path or request /volumes/vol_a8678c97e2708cf6e00aea160a4d46a0 >[negroni] Completed 404 Not Found in 300.201µs >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 202 Accepted in 27.100325ms >[asynchttp] INFO 2018/06/08 09:49:06 asynchttp.go:288: Started job e7e5e1abac474812ec612c70e75804fb >[heketi] INFO 2018/06/08 09:49:06 Started async operation: Delete Volume >[negroni] Started GET /queue/e7e5e1abac474812ec612c70e75804fb >[negroni] Completed 200 OK in 163.978µs >[negroni] Completed 202 Accepted in 50.173801ms >[asynchttp] INFO 2018/06/08 09:49:06 asynchttp.go:288: Started job 5af0c4eec3041a9f85e5ad13ea51c801 >[heketi] INFO 2018/06/08 09:49:06 Started async operation: Delete Volume >[negroni] Started GET /queue/5af0c4eec3041a9f85e5ad13ea51c801 >[negroni] Completed 200 OK in 126.372µs >[kubeexec] DEBUG 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] DEBUG 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: 
Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command 
[gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: 
vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:49:06 asynchttp.go:292: Completed job e7e5e1abac474812ec612c70e75804fb in 781.891459ms >[asynchttp] INFO 2018/06/08 09:49:06 asynchttp.go:292: Completed job 5af0c4eec3041a9f85e5ad13ea51c801 in 772.64938ms >[heketi] ERROR 2018/06/08 09:49:06 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/e7e5e1abac474812ec612c70e75804fb >[negroni] Completed 500 Internal Server Error in 158.446µs >[negroni] Started GET /queue/5af0c4eec3041a9f85e5ad13ea51c801 >[negroni] Completed 
500 Internal Server Error in 186.129µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 11.789289ms >[negroni] Started GET /volumes/07ef5105131fa51c35a9007ee213ea7a >[negroni] Completed 200 OK in 3.25035ms >[negroni] Started GET /volumes/15e0122e942fc41f80666a3714670682 >[negroni] Completed 200 OK in 2.228214ms >[negroni] Started GET /volumes/1918777ef3ce84df17c8a114fb89f33e >[negroni] Completed 200 OK in 2.375436ms >[negroni] Started GET /volumes/1a828ff2310b778d09ffdadd755dc5ee >[negroni] Completed 200 OK in 1.867887ms >[negroni] Started GET /volumes/1ef58be42cf8ea7cf9298cff303e903a >[negroni] Completed 200 OK in 1.845809ms >[negroni] Started GET /volumes/21481d8911fe8ec238d97d71c1aa5cb3 >[negroni] Completed 200 OK in 972.409µs >[negroni] Started GET /volumes/226838416791f3286fcacb7e5f1ff59d >[negroni] Completed 200 OK in 1.773314ms >[negroni] Started GET /volumes/2bf097c60bd8b38bfcb4327727ca5681 >[negroni] Completed 200 OK in 1.896ms >[negroni] Started GET /volumes/337bf2c01bf8c45eec5bab53ad5c2e46 >[negroni] Completed 200 OK in 1.873472ms >[negroni] Started GET /volumes/43194b98d83e8b61b376ccc54f79333a >[negroni] Completed 200 OK in 1.24484ms >[negroni] Started GET /volumes/5d0dfb0ebb846fcd225c890ec9cdb885 >[negroni] Completed 200 OK in 1.788782ms >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 1.931802ms >[negroni] Started GET /volumes/7d5a429f821efc8e8fe3f29569732b86 >[negroni] Completed 200 OK in 1.414164ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.150463ms >[negroni] Started GET /volumes/8757685f765bbd74556cbf75086c88f6 >[negroni] Completed 200 OK in 1.071672ms >[negroni] Started GET /volumes/89ebb1e7eed2ff557488996a1657e75e >[negroni] Completed 200 OK in 1.389358ms >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.884792ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d 
>[negroni] Completed 200 OK in 2.192383ms >[negroni] Started GET /volumes/9f9fb17746d0aad637b132875d2744e5 >[negroni] Completed 200 OK in 917.978µs >[negroni] Started GET /volumes/9fb2830da79dd70d910dad8426dc236f >[negroni] Completed 200 OK in 751.522µs >[negroni] Started GET /volumes/a6be87754541710c38b420381c76fb8c >[negroni] Completed 200 OK in 2.879309ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.235781ms >[negroni] Started GET /volumes/a8678c97e2708cf6e00aea160a4d46a0 >[negroni] Completed 200 OK in 959.558µs >[negroni] Started GET /volumes/ad1e5849e9566f1bcaa09cfb9c0b96ef >[negroni] Completed 200 OK in 877.982µs >[negroni] Started GET /volumes/aee5088d2304cb95535752ef85f9f392 >[negroni] Completed 200 OK in 1.702973ms >[negroni] Started GET /volumes/afef5f4a84abec3ab3e4f2a5bae2db23 >[negroni] Completed 200 OK in 618.237µs >[negroni] Started GET /volumes/b7fb8f19c77039982909b8868f7af4cc >[negroni] Completed 200 OK in 636.048µs >[negroni] Started GET /volumes/b8f78f128f20b61ef032ce9ee5b6481c >[negroni] Completed 200 OK in 1.295152ms >[negroni] Started GET /volumes/b99532640a5201d243193159ee762ae4 >[negroni] Completed 200 OK in 673.668µs >[negroni] Started GET /volumes/bf31af76ef6c54e7e8f24f4d8711cb22 >[negroni] Completed 200 OK in 652.896µs >[negroni] Started GET /volumes/c1dce5388e8c136a89e4a25e4cc97821 >[negroni] Completed 200 OK in 667.489µs >[negroni] Started GET /volumes/cc8a686464bb4017c91ba7294ee1b091 >[negroni] Completed 200 OK in 635.14µs >[negroni] Started GET /volumes/d21ed39ae095af2674175a798a0cb02c >[negroni] Completed 200 OK in 870.208µs >[negroni] Started GET /volumes/daf13b7f607d1b280c78e909af25a215 >[negroni] Completed 200 OK in 1.473562ms >[negroni] Started GET /volumes/dc9ab13a25ccbad8262fba92766a31f9 >[negroni] Completed 200 OK in 663.517µs >[negroni] Started GET /volumes/dcaf25d0becadd0bb3c732c2c2ca27da >[negroni] Completed 200 OK in 865.485µs >[negroni] Started GET 
/volumes/e25438438fd2d50a0b07f26b4bfb338a >[negroni] Completed 200 OK in 861.116µs >[negroni] Started GET /volumes/f73270331b95278f490fd1dfe0b010df >[negroni] Completed 200 OK in 732.657µs >[heketi] INFO 2018/06/08 09:49:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:49:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:49:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 25min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option 
*-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ22979 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:49:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:49:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:49:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 25min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1106 
/usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ24728 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:49:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:49:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:49:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 23min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ââ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ13734 /usr/sbin/glusterfsd -s 10.70.47.76 
--volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > ââ22654 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:49:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:49:14 Cleaned 0 nodes from health cache >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 323.452µs >[negroni] Started GET /volumes/07ef5105131fa51c35a9007ee213ea7a >[negroni] Completed 200 OK in 715.323µs >[negroni] Started GET /volumes/15e0122e942fc41f80666a3714670682 >[negroni] Completed 200 OK in 1.045896ms >[negroni] Started GET /volumes/1918777ef3ce84df17c8a114fb89f33e >[negroni] Completed 200 OK in 641.228µs >[negroni] Started GET /volumes/1a828ff2310b778d09ffdadd755dc5ee >[negroni] Completed 200 OK in 580.478µs >[negroni] Started GET /volumes/1ef58be42cf8ea7cf9298cff303e903a >[negroni] Completed 200 OK in 642.616µs >[negroni] Started GET /volumes/21481d8911fe8ec238d97d71c1aa5cb3 >[negroni] Completed 200 OK in 635.816µs >[negroni] Started GET /volumes/226838416791f3286fcacb7e5f1ff59d >[negroni] Completed 200 OK in 1.041199ms >[negroni] Started GET /volumes/2bf097c60bd8b38bfcb4327727ca5681 >[negroni] Completed 200 OK in 1.037963ms >[negroni] Started GET /volumes/337bf2c01bf8c45eec5bab53ad5c2e46 >[negroni] Completed 200 OK in 564.311µs >[negroni] Started GET /volumes/43194b98d83e8b61b376ccc54f79333a >[negroni] Completed 200 OK in 608.032µs >[negroni] Started GET /volumes/5d0dfb0ebb846fcd225c890ec9cdb885 >[negroni] Completed 200 OK in 600.062µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 615.907µs >[negroni] Started GET /volumes/7d5a429f821efc8e8fe3f29569732b86 >[negroni] Completed 200 OK in 1.058543ms >[negroni] Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 605.04µs >[negroni] Started GET 
/volumes/8757685f765bbd74556cbf75086c88f6 >[negroni] Completed 200 OK in 1.034782ms >[negroni] Started GET /volumes/89ebb1e7eed2ff557488996a1657e75e >[negroni] Completed 200 OK in 607.03µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 633.957µs >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 948.933µs >[negroni] Started GET /volumes/9f9fb17746d0aad637b132875d2744e5 >[negroni] Completed 200 OK in 1.025472ms >[negroni] Started GET /volumes/9fb2830da79dd70d910dad8426dc236f >[negroni] Completed 200 OK in 1.537436ms >[negroni] Started GET /volumes/a6be87754541710c38b420381c76fb8c >[negroni] Completed 200 OK in 891.484µs >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 1.066324ms >[negroni] Started GET /volumes/a8678c97e2708cf6e00aea160a4d46a0 >[negroni] Completed 200 OK in 668.218µs >[negroni] Started GET /volumes/ad1e5849e9566f1bcaa09cfb9c0b96ef >[negroni] Completed 200 OK in 606.162µs >[negroni] Started GET /volumes/aee5088d2304cb95535752ef85f9f392 >[negroni] Completed 200 OK in 601.849µs >[negroni] Started GET /volumes/afef5f4a84abec3ab3e4f2a5bae2db23 >[negroni] Completed 200 OK in 630.418µs >[negroni] Started GET /volumes/b7fb8f19c77039982909b8868f7af4cc >[negroni] Completed 200 OK in 613.773µs >[negroni] Started GET /volumes/b8f78f128f20b61ef032ce9ee5b6481c >[negroni] Completed 200 OK in 660.421µs >[negroni] Started GET /volumes/b99532640a5201d243193159ee762ae4 >[negroni] Completed 200 OK in 608.844µs >[negroni] Started GET /volumes/bf31af76ef6c54e7e8f24f4d8711cb22 >[negroni] Completed 200 OK in 612.336µs >[negroni] Started GET /volumes/c1dce5388e8c136a89e4a25e4cc97821 >[negroni] Completed 200 OK in 605.474µs >[negroni] Started GET /volumes/cc8a686464bb4017c91ba7294ee1b091 >[negroni] Completed 200 OK in 584.536µs >[negroni] Started GET /volumes/d21ed39ae095af2674175a798a0cb02c >[negroni] Completed 200 OK in 775.373µs 
>[negroni] Started GET /volumes/daf13b7f607d1b280c78e909af25a215 >[negroni] Completed 200 OK in 754.699µs >[negroni] Started GET /volumes/dc9ab13a25ccbad8262fba92766a31f9 >[negroni] Completed 200 OK in 697.176µs >[negroni] Started GET /volumes/dcaf25d0becadd0bb3c732c2c2ca27da >[negroni] Completed 200 OK in 735.925µs >[negroni] Started GET /volumes/e25438438fd2d50a0b07f26b4bfb338a >[negroni] Completed 200 OK in 586.822µs >[negroni] Started GET /volumes/f73270331b95278f490fd1dfe0b010df >[negroni] Completed 200 OK in 614.001µs >[negroni] Started GET /volumes >[negroni] Completed 200 OK in 254.711µs >[negroni] Started GET /volumes/07ef5105131fa51c35a9007ee213ea7a >[negroni] Completed 200 OK in 698.57µs >[negroni] Started GET /volumes/15e0122e942fc41f80666a3714670682 >[negroni] Completed 200 OK in 546.377µs >[negroni] Started GET /volumes/1918777ef3ce84df17c8a114fb89f33e >[negroni] Completed 200 OK in 668.887µs >[negroni] Started GET /volumes/1a828ff2310b778d09ffdadd755dc5ee >[negroni] Completed 200 OK in 651.281µs >[negroni] Started GET /volumes/1ef58be42cf8ea7cf9298cff303e903a >[negroni] Completed 200 OK in 1.116412ms >[negroni] Started GET /volumes/21481d8911fe8ec238d97d71c1aa5cb3 >[negroni] Completed 200 OK in 1.069092ms >[negroni] Started GET /volumes/226838416791f3286fcacb7e5f1ff59d >[negroni] Completed 200 OK in 1.720524ms >[negroni] Started GET /volumes/2bf097c60bd8b38bfcb4327727ca5681 >[negroni] Completed 200 OK in 1.085712ms >[negroni] Started GET /volumes/337bf2c01bf8c45eec5bab53ad5c2e46 >[negroni] Completed 200 OK in 582.095µs >[negroni] Started GET /volumes/43194b98d83e8b61b376ccc54f79333a >[negroni] Completed 200 OK in 643.011µs >[negroni] Started GET /volumes/5d0dfb0ebb846fcd225c890ec9cdb885 >[negroni] Completed 200 OK in 608.419µs >[negroni] Started GET /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Completed 200 OK in 581.241µs >[negroni] Started GET /volumes/7d5a429f821efc8e8fe3f29569732b86 >[negroni] Completed 200 OK in 637.855µs >[negroni] 
Started GET /volumes/82bdce27665c94e646139a23b29e3033 >[negroni] Completed 200 OK in 1.176005ms >[negroni] Started GET /volumes/8757685f765bbd74556cbf75086c88f6 >[negroni] Completed 200 OK in 1.305075ms >[negroni] Started GET /volumes/89ebb1e7eed2ff557488996a1657e75e >[negroni] Completed 200 OK in 694.716µs >[negroni] Started GET /volumes/9c7d5f0f473cce6914804135f0b8ddcd >[negroni] Completed 200 OK in 1.034189ms >[negroni] Started GET /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 200 OK in 907.402µs >[negroni] Started GET /volumes/9f9fb17746d0aad637b132875d2744e5 >[negroni] Completed 200 OK in 910.897µs >[negroni] Started GET /volumes/9fb2830da79dd70d910dad8426dc236f >[negroni] Completed 200 OK in 714.808µs >[negroni] Started GET /volumes/a6be87754541710c38b420381c76fb8c >[negroni] Completed 200 OK in 1.669598ms >[negroni] Started GET /volumes/a7c445e486a69e73d8a54e3278593185 >[negroni] Completed 200 OK in 782.228µs >[negroni] Started GET /volumes/a8678c97e2708cf6e00aea160a4d46a0 >[negroni] Completed 200 OK in 931.088µs >[negroni] Started GET /volumes/ad1e5849e9566f1bcaa09cfb9c0b96ef >[negroni] Completed 200 OK in 1.287347ms >[negroni] Started GET /volumes/aee5088d2304cb95535752ef85f9f392 >[negroni] Completed 200 OK in 1.016548ms >[negroni] Started GET /volumes/afef5f4a84abec3ab3e4f2a5bae2db23 >[negroni] Completed 200 OK in 1.032693ms >[negroni] Started GET /volumes/b7fb8f19c77039982909b8868f7af4cc >[negroni] Completed 200 OK in 1.052985ms >[negroni] Started GET /volumes/b8f78f128f20b61ef032ce9ee5b6481c >[negroni] Completed 200 OK in 1.039019ms >[negroni] Started GET /volumes/b99532640a5201d243193159ee762ae4 >[negroni] Completed 200 OK in 846.804µs >[negroni] Started GET /volumes/bf31af76ef6c54e7e8f24f4d8711cb22 >[negroni] Completed 200 OK in 732.922µs >[negroni] Started GET /volumes/c1dce5388e8c136a89e4a25e4cc97821 >[negroni] Completed 200 OK in 544.854µs >[negroni] Started GET /volumes/cc8a686464bb4017c91ba7294ee1b091 >[negroni] Completed 200 OK 
in 576.89µs >[negroni] Started GET /volumes/d21ed39ae095af2674175a798a0cb02c >[negroni] Completed 200 OK in 620.681µs >[negroni] Started GET /volumes/daf13b7f607d1b280c78e909af25a215 >[negroni] Completed 200 OK in 580.201µs >[negroni] Started GET /volumes/dc9ab13a25ccbad8262fba92766a31f9 >[negroni] Completed 200 OK in 580.165µs >[negroni] Started GET /volumes/dcaf25d0becadd0bb3c732c2c2ca27da >[negroni] Completed 200 OK in 591.495µs >[negroni] Started GET /volumes/e25438438fd2d50a0b07f26b4bfb338a >[negroni] Completed 200 OK in 622.42µs >[negroni] Started GET /volumes/f73270331b95278f490fd1dfe0b010df >[negroni] Completed 200 OK in 574.18µs >[heketi] INFO 2018/06/08 09:51:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:51:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:51:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 27min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p 
/var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─22979 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:51:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:51:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:51:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 27min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─24728 /usr/sbin/glusterfs -s localhost 
--volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:51:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:51:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:51:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 25min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S 
/var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─22654 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:51:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:51:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 17.638262ms >[asynchttp] INFO 2018/06/08 09:51:20 asynchttp.go:288: Started job f3ae4c6e96e94eefdfa0162e5f6c7bce >[heketi] INFO 2018/06/08 09:51:20 Started async operation: Delete Volume >[negroni] Started GET /queue/f3ae4c6e96e94eefdfa0162e5f6c7bce >[negroni] Completed 200 OK in 220.329µs >[negroni] Completed 202 Accepted in 43.291398ms >[asynchttp] INFO 2018/06/08 09:51:21 asynchttp.go:288: Started job 0c8c6b6bbcc0c09db45f0a14863f2c6a >[heketi] INFO 2018/06/08 09:51:21 Started async operation: Delete Volume >[negroni] Started GET /queue/0c8c6b6bbcc0c09db45f0a14863f2c6a >[negroni] Completed 200 OK in 89.435µs >[kubeexec] DEBUG 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] DEBUG 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> 
><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: 
failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:51:21 asynchttp.go:292: Completed job f3ae4c6e96e94eefdfa0162e5f6c7bce in 778.183068ms >[heketi] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume 
vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:51:21 asynchttp.go:292: Completed job 0c8c6b6bbcc0c09db45f0a14863f2c6a in 778.00841ms >[heketi] ERROR 2018/06/08 09:51:21 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[negroni] Started GET /queue/f3ae4c6e96e94eefdfa0162e5f6c7bce >[negroni] Completed 500 Internal Server Error in 199.242µs >[negroni] Started GET /queue/0c8c6b6bbcc0c09db45f0a14863f2c6a >[negroni] Completed 500 Internal Server Error in 210.333µs >[negroni] Started DELETE /volumes/vol_a8678c97e2708cf6e00aea160a4d46a0 >[heketi] WARNING 2018/06/08 09:53:12 Invalid path or request 
/volumes/vol_a8678c97e2708cf6e00aea160a4d46a0 >[negroni] Completed 404 Not Found in 194.067µs >[heketi] INFO 2018/06/08 09:53:14 Starting Node Health Status refresh >[cmdexec] INFO 2018/06/08 09:53:14 Check Glusterd service status in node dhcp46-187.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:53:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-187.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh2m Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:33 UTC; 3h 29min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fed567_6ae4_11e8_bc4b_005056a5f18a.slice/docker-a506d9aaca4b440d184df40d01e90b2040811c8090bf4c19d892081307b28c0c.scope/system.slice/glusterd.service > ââ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 1228 /usr/sbin/glusterfsd -s 10.70.46.187 --volfile-id heketidbstorage.10.70.46.187.var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.187-var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.pid -S /var/run/gluster/c08898c2e3b0190c62df3a9d4a013c92.socket --brick-name /var/lib/heketi/mounts/vg_d389f0278a774bd7443a09af960961d8/brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_d389f0278a774bd7443a09af960961d8-brick_a7665c2ce0e7d81c9d3b9e4e2fb0e506-brick.log --xlator-option *-posix.glusterd-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 --brick-port 49152 --xlator-option 
heketidbstorage-server.listen-port=49152 > └─22979 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/8fb729a0bbd51b204491c603c84acaa4.socket --xlator-option *replicate*.node-uuid=aa138bf5-fef6-494f-884c-a949f7f1f034 > >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Unit glusterd.service entered failed state. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: glusterd.service failed. >Jun 08 06:23:31 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:33 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. >[heketi] INFO 2018/06/08 09:53:14 Periodic health check status: node 278bd6b4e16a8e62ef15aaae22e6abc1 up=true >[cmdexec] INFO 2018/06/08 09:53:14 Check Glusterd service status in node dhcp46-122.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:53:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 06:23:42 UTC; 3h 29min ago > Process: 432 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 433 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4900c968_6ae4_11e8_bc4b_005056a5f18a.slice/docker-438fb5fb16a82a3e701b28fa1f1ef8623dc37ca06374c604e5781a2b2ab2e06d.scope/system.slice/glusterd.service > ├─ 433 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1106 /usr/sbin/glusterfsd -s 10.70.46.122 --volfile-id 
heketidbstorage.10.70.46.122.var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.122-var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.pid -S /var/run/gluster/9fa3ac3b12826912287adc88c26ac134.socket --brick-name /var/lib/heketi/mounts/vg_66a9af9f6bac95f9d8d556a2f14c29d3/brick_29e5b36d921cca1b0699871d9727c932/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_66a9af9f6bac95f9d8d556a2f14c29d3-brick_29e5b36d921cca1b0699871d9727c932-brick.log --xlator-option *-posix.glusterd-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > └─24728 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f8994cdb09351d83e70da08261d99c64.socket --xlator-option *replicate*.node-uuid=8b7c7b5c-3c70-4970-9fb7-e4805e6b65f0 > >Jun 08 06:23:40 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 06:23:42 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>[heketi] INFO 2018/06/08 09:53:14 Periodic health check status: node 70423cc0bcd044fe5ba8bbbd256a3e49 up=true >[cmdexec] INFO 2018/06/08 09:53:14 Check Glusterd service status in node dhcp47-76.lab.eng.blr.redhat.com >[kubeexec] DEBUG 2018/06/08 09:53:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Fri 2018-06-08 07:25:20 UTC; 2h 27min ago > Process: 828 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 830 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cb2299_6aec_11e8_bc4b_005056a5f18a.slice/docker-7f9223abc9a8641cea68f374f4039235515978ee2dab970ac6cbef182c535426.scope/system.slice/glusterd.service > ├─ 830 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 1197 /usr/sbin/glusterfsd -s 10.70.47.76 --volfile-id heketidbstorage.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.pid -S /var/run/gluster/b6976f9ddc3abc90e1dfa91c35a7c611.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_54c06cdf52c6fe1e3a3f8a9d39cde33d/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_54c06cdf52c6fe1e3a3f8a9d39cde33d-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─13734 /usr/sbin/glusterfsd -s 10.70.47.76 
--volfile-id vol_bf31af76ef6c54e7e8f24f4d8711cb22.10.70.47.76.var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick -p /var/run/gluster/vols/vol_bf31af76ef6c54e7e8f24f4d8711cb22/10.70.47.76-var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.pid -S /var/run/gluster/7aabe8187d4b998d498dd5d4edbd2626.socket --brick-name /var/lib/heketi/mounts/vg_96f1667f2f1ced2c5ef94772922be93b/brick_c6935061d1c9dbb05f316e3d87080a38/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_96f1667f2f1ced2c5ef94772922be93b-brick_c6935061d1c9dbb05f316e3d87080a38-brick.log --xlator-option *-posix.glusterd-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 --brick-port 49153 --xlator-option vol_bf31af76ef6c54e7e8f24f4d8711cb22-server.listen-port=49153 > └─22654 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ec468453b11caed5259f5e1abcc65b44.socket --xlator-option *replicate*.node-uuid=1a05c9a0-f9cc-4055-8f22-61c6dd314c94 > >Jun 08 07:25:17 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Starting GlusterFS, a clustered file-system server... >Jun 08 07:25:20 dhcp47-76.lab.eng.blr.redhat.com systemd[1]: Started GlusterFS, a clustered file-system server. 
>Jun 08 08:53:58 dhcp47-76.lab.eng.blr.redhat.com glusterd[830]: [2018-06-08 08:53:58.338441] C [rpc-clnt.c:465:rpc_clnt_fill_request_info] 0-management: cannot lookup the saved frame corresponding to xid (103) >[heketi] INFO 2018/06/08 09:53:14 Periodic health check status: node d942fe6c0ee5691b7cc263968f97b650 up=true >[heketi] INFO 2018/06/08 09:53:14 Cleaned 0 nodes from health cache >[negroni] Started DELETE /volumes/5db55c483f6984d9917cfe4b3c8b3cbc >[negroni] Started DELETE /volumes/9c85de66c12db0a72b4d16fe888ff74d >[negroni] Completed 202 Accepted in 19.305117ms >[asynchttp] INFO 2018/06/08 09:53:35 asynchttp.go:288: Started job 59c48f7b67bd8d3cccd84a55c1cb7ee8 >[heketi] INFO 2018/06/08 09:53:35 Started async operation: Delete Volume >[negroni] Started GET /queue/59c48f7b67bd8d3cccd84a55c1cb7ee8 >[negroni] Completed 200 OK in 151.492µs >[negroni] Completed 202 Accepted in 34.138647ms >[asynchttp] INFO 2018/06/08 09:53:36 asynchttp.go:288: Started job dbbf880df6955eb8f03eb50ffd8e699a >[heketi] INFO 2018/06/08 09:53:36 Started async operation: Delete Volume >[negroni] Started GET /queue/dbbf880df6955eb8f03eb50ffd8e699a >[negroni] Completed 200 OK in 169.592µs >[kubeexec] DEBUG 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-122.lab.eng.blr.redhat.com Pod: glusterfs-storage-pg4xc Command: gluster --mode=script snapshot list vol_5db55c483f6984d9917cfe4b3c8b3cbc --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_5db55c483f6984d9917cfe4b3c8b3cbc) does not exist</opErrstr> ></cliOutput> >[kubeexec] DEBUG 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-76.lab.eng.blr.redhat.com Pod: glusterfs-storage-gxp7c Command: gluster --mode=script snapshot list vol_9c85de66c12db0a72b4d16fe888ff74d --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> 
><cliOutput> > <opRet>-1</opRet> > <opErrno>30806</opErrno> > <opErrstr>Volume (vol_9c85de66c12db0a72b4d16fe888ff74d) does not exist</opErrstr> ></cliOutput> >[kubeexec] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_5db55c483f6984d9917cfe4b3c8b3cbc force] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume stop: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop vol_9c85de66c12db0a72b4d16fe888ff74d force] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:144: Unable to stop volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume stop: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[kubeexec] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_5db55c483f6984d9917cfe4b3c8b3cbc] on glusterfs-storage-pg4xc: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: 
failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >] >[cmdexec] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[heketi] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[kubeexec] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete vol_9c85de66c12db0a72b4d16fe888ff74d] on glusterfs-storage-gxp7c: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >] >[cmdexec] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:153: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:525: Unable to 
delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:429: Error executing delete volume: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[heketi] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_5db55c483f6984d9917cfe4b3c8b3cbc: Unable to execute command on glusterfs-storage-pg4xc: volume delete: vol_5db55c483f6984d9917cfe4b3c8b3cbc: failed: Volume vol_5db55c483f6984d9917cfe4b3c8b3cbc does not exist >[asynchttp] INFO 2018/06/08 09:53:36 asynchttp.go:292: Completed job 59c48f7b67bd8d3cccd84a55c1cb7ee8 in 796.079381ms >[heketi] ERROR 2018/06/08 09:53:36 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:1185: Delete Volume Failed: Unable to delete volume vol_9c85de66c12db0a72b4d16fe888ff74d: Unable to execute command on glusterfs-storage-gxp7c: volume delete: vol_9c85de66c12db0a72b4d16fe888ff74d: failed: Volume vol_9c85de66c12db0a72b4d16fe888ff74d does not exist >[asynchttp] INFO 2018/06/08 09:53:36 asynchttp.go:292: Completed job dbbf880df6955eb8f03eb50ffd8e699a in 793.577264ms >[negroni] Started GET /queue/59c48f7b67bd8d3cccd84a55c1cb7ee8 >[negroni] Completed 500 Internal Server Error in 154.797µs >[negroni] Started GET /queue/dbbf880df6955eb8f03eb50ffd8e699a >[negroni] Completed 500 Internal Server Error in 168.215µs