Red Hat Bugzilla – Attachment 1476487 Details for Bug 1612782
creation of block volume is failing, but device storage is getting used
Description: heketi_logs_after_blockvolumecreate_failed
Filename: hk.logs
MIME Type: text/plain
Size: 34.46 KB
Creator: krishnaram Karthick
Created: 2018-08-16 18:15:24 UTC
>Heketi 7.0.0
>[heketi] INFO 2018/08/16 18:11:02 Loaded kubernetes executor
>[heketi] INFO 2018/08/16 18:11:02 Block: Auto Create Block Hosting Volume set to true
>[heketi] INFO 2018/08/16 18:11:02 Block: New Block Hosting Volume size 100 GB
>[heketi] INFO 2018/08/16 18:11:02 GlusterFS Application Loaded
>[heketi] INFO 2018/08/16 18:11:02 Started Node Health Cache Monitor
>Authorization loaded
>Listening on port 8080
>[heketi] INFO 2018/08/16 18:11:12 Starting Node Health Status refresh
>[cmdexec] INFO 2018/08/16 18:11:12 Check Glusterd service status in node dhcp47-183.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/08/16 18:11:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Thu 2018-08-16 17:06:08 UTC; 1h 5min ago
> Process: 475 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 478 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda555347f_a176_11e8_9e7f_005056a51f55.slice/docker-b00ddadd0c9b0c132f31c40db0db4f671339b8389f000e5dfaa10df22cef6311.scope/system.slice/glusterd.service
> ├─ 478 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 937 /usr/sbin/glusterfsd -s 10.70.47.183 --volfile-id heketidbstorage.10.70.47.183.var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.183-var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick.pid -S /var/run/gluster/5ad64fd5618d695a3c7e209b6a93a8a7.socket --brick-name /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_f4cc9aa56c8efb4c60285d771bd5fca9/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick.log --xlator-option *-posix.glusterd-uuid=27d4591d-5f1e-4861-b5e4-15f5da964895 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─3882 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/150b377103f5f726f777a10d43c66a88.socket --xlator-option *replicate*.node-uuid=27d4591d-5f1e-4861-b5e4-15f5da964895
>[heketi] INFO 2018/08/16 18:11:13 Periodic health check status: node 5f0be17866654828d51d78022320f1f8 up=true
>[cmdexec] INFO 2018/08/16 18:11:13 Check Glusterd service status in node dhcp46-152.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/08/16 18:11:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Thu 2018-08-16 17:06:08 UTC; 1h 5min ago
> Process: 478 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 479 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5571c09_a176_11e8_9e7f_005056a51f55.slice/docker-3f6eb7e322354939c5b418fcff80477089327717190c6339a52091463a4fb665.scope/system.slice/glusterd.service
> ├─ 479 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 884 /usr/sbin/glusterfsd -s 10.70.46.152 --volfile-id heketidbstorage.10.70.46.152.var-lib-heketi-mounts-vg_af2c81ded89e5ffa84b759f5fc717bc7-brick_a2837129de8dc24c6590ce680c7453a2-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.152-var-lib-heketi-mounts-vg_af2c81ded89e5ffa84b759f5fc717bc7-brick_a2837129de8dc24c6590ce680c7453a2-brick.pid -S /var/run/gluster/89e7191ff4070aeee75c43b2d03a55f4.socket --brick-name /var/lib/heketi/mounts/vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_a2837129de8dc24c6590ce680c7453a2/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_af2c81ded89e5ffa84b759f5fc717bc7-brick_a2837129de8dc24c6590ce680c7453a2-brick.log --xlator-option *-posix.glusterd-uuid=c501c122-9937-4b4c-95cc-12e5fb3d3975 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─3730 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/d20abc74d8025a94ddb90d11e4c58a05.socket --xlator-option *replicate*.node-uuid=c501c122-9937-4b4c-95cc-12e5fb3d3975
>[heketi] INFO 2018/08/16 18:11:13 Periodic health check status: node 6120e5eb7b35b19b1389d33ad1cf9991 up=true
>[cmdexec] INFO 2018/08/16 18:11:13 Check Glusterd service status in node dhcp47-54.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/08/16 18:11:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Thu 2018-08-16 17:06:08 UTC; 1h 5min ago
> Process: 477 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 478 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda557649a_a176_11e8_9e7f_005056a51f55.slice/docker-f8e8d0175314e51cfe78a773355617795a07ae00dce60d376257007aeca4810b.scope/system.slice/glusterd.service
> ├─ 478 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 864 /usr/sbin/glusterfsd -s 10.70.47.54 --volfile-id heketidbstorage.10.70.47.54.var-lib-heketi-mounts-vg_3fecdb9fc84903807eef185ca1057930-brick_d214f841f456423a147bb31fb225fd78-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.54-var-lib-heketi-mounts-vg_3fecdb9fc84903807eef185ca1057930-brick_d214f841f456423a147bb31fb225fd78-brick.pid -S /var/run/gluster/f68a7b751cdeb40d644979a12160a720.socket --brick-name /var/lib/heketi/mounts/vg_3fecdb9fc84903807eef185ca1057930/brick_d214f841f456423a147bb31fb225fd78/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_3fecdb9fc84903807eef185ca1057930-brick_d214f841f456423a147bb31fb225fd78-brick.log --xlator-option *-posix.glusterd-uuid=ee49d270-8512-49a0-b4a6-601b3a403f5d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> └─3793 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/5354b44c224ed2c52d98bfffd91b4407.socket --xlator-option *replicate*.node-uuid=ee49d270-8512-49a0-b4a6-601b3a403f5d
>[heketi] INFO 2018/08/16 18:11:13 Periodic health check status: node 9889f77e242b9e5b2edc1f07ea800b87 up=true
>[heketi] INFO 2018/08/16 18:11:13 Cleaned 0 nodes from health cache
>[negroni] Started POST /blockvolumes
>[heketi] INFO 2018/08/16 18:11:36 Allocating brick set #0
>[negroni] Completed 202 Accepted in 15.282629ms
>[asynchttp] INFO 2018/08/16 18:11:36 asynchttp.go:288: Started job d51034cb5ff35e9d5f58a1e8ce8d6444
>[heketi] INFO 2018/08/16 18:11:36 Started async operation: Create Block Volume
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 99.844µs
>[heketi] INFO 2018/08/16 18:11:36 Creating brick 2026c337eb46b3dec4391cb9f6f75031
>[heketi] INFO 2018/08/16 18:11:36 Creating brick ee08783e58ff82264d79071bc7892586
>[heketi] INFO 2018/08/16 18:11:36 Creating brick 13493ad3ccc3b2f3486d0f0b8e43cda1
>[kubeexec] DEBUG 2018/08/16 18:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: mkdir -p /var/lib/heketi/mounts/vg_3fecdb9fc84903807eef185ca1057930/brick_2026c337eb46b3dec4391cb9f6f75031
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: mkdir -p /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: mkdir -p /var/lib/heketi/mounts/vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: lvcreate --autobackup=n --poolmetadatasize 524288K --chunksize 256K --size 104857600K --thin vg_af2c81ded89e5ffa84b759f5fc717bc7/tp_13493ad3ccc3b2f3486d0f0b8e43cda1 --virtualsize 104857600K --name brick_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_13493ad3ccc3b2f3486d0f0b8e43cda1" created.
>[kubeexec] DEBUG 2018/08/16 18:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: lvcreate --autobackup=n --poolmetadatasize 524288K --chunksize 256K --size 104857600K --thin vg_f89b9b3b7340e500f2c6367273182b28/tp_ee08783e58ff82264d79071bc7892586 --virtualsize 104857600K --name brick_ee08783e58ff82264d79071bc7892586
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_ee08783e58ff82264d79071bc7892586" created.
>[kubeexec] DEBUG 2018/08/16 18:11:37 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: lvcreate --autobackup=n --poolmetadatasize 524288K --chunksize 256K --size 104857600K --thin vg_3fecdb9fc84903807eef185ca1057930/tp_2f2347e81557f8d4e1de9899a545fb13 --virtualsize 104857600K --name brick_2026c337eb46b3dec4391cb9f6f75031
>Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
> Logical volume "brick_2026c337eb46b3dec4391cb9f6f75031" created.
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 154.837µs
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586
>Result: meta-data=/dev/mapper/vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586 isize=512 agcount=16, agsize=1638400 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=26214400, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=12800, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_3fecdb9fc84903807eef185ca1057930-brick_2026c337eb46b3dec4391cb9f6f75031
>Result: meta-data=/dev/mapper/vg_3fecdb9fc84903807eef185ca1057930-brick_2026c337eb46b3dec4391cb9f6f75031 isize=512 agcount=16, agsize=1638400 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=26214400, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=12800, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_af2c81ded89e5ffa84b759f5fc717bc7-brick_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result: meta-data=/dev/mapper/vg_af2c81ded89e5ffa84b759f5fc717bc7-brick_13493ad3ccc3b2f3486d0f0b8e43cda1 isize=512 agcount=16, agsize=1638400 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=0, sparse=0
>data = bsize=4096 blocks=26214400, imaxpct=25
> = sunit=64 swidth=64 blks
>naming =version 2 bsize=8192 ascii-ci=0 ftype=1
>log =internal log bsize=4096 blocks=12800, version=2
> = sectsz=512 sunit=64 blks, lazy-count=1
>realtime =none extsz=4096 blocks=0, rtextents=0
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: awk "BEGIN {print \"/dev/mapper/vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586 /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: awk "BEGIN {print \"/dev/mapper/vg_3fecdb9fc84903807eef185ca1057930-brick_2026c337eb46b3dec4391cb9f6f75031 /var/lib/heketi/mounts/vg_3fecdb9fc84903807eef185ca1057930/brick_2026c337eb46b3dec4391cb9f6f75031 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: awk "BEGIN {print \"/dev/mapper/vg_af2c81ded89e5ffa84b759f5fc717bc7-brick_13493ad3ccc3b2f3486d0f0b8e43cda1 /var/lib/heketi/mounts/vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_13493ad3ccc3b2f3486d0f0b8e43cda1 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586 /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_3fecdb9fc84903807eef185ca1057930-brick_2026c337eb46b3dec4391cb9f6f75031 /var/lib/heketi/mounts/vg_3fecdb9fc84903807eef185ca1057930/brick_2026c337eb46b3dec4391cb9f6f75031
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_af2c81ded89e5ffa84b759f5fc717bc7-brick_13493ad3ccc3b2f3486d0f0b8e43cda1 /var/lib/heketi/mounts/vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: mkdir /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586/brick
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: mkdir /var/lib/heketi/mounts/vg_3fecdb9fc84903807eef185ca1057930/brick_2026c337eb46b3dec4391cb9f6f75031/brick
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:38 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: mkdir /var/lib/heketi/mounts/vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_13493ad3ccc3b2f3486d0f0b8e43cda1/brick
>Result:
>[cmdexec] INFO 2018/08/16 18:11:38 Creating volume vol_1798b5e4249cf3f68a1de902c34238a1 replica 3
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 150.939µs
>[kubeexec] DEBUG 2018/08/16 18:11:39 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: gluster --mode=script volume create vol_1798b5e4249cf3f68a1de902c34238a1 replica 3 10.70.47.54:/var/lib/heketi/mounts/vg_3fecdb9fc84903807eef185ca1057930/brick_2026c337eb46b3dec4391cb9f6f75031/brick 10.70.46.152:/var/lib/heketi/mounts/vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_13493ad3ccc3b2f3486d0f0b8e43cda1/brick 10.70.47.183:/var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586/brick
>Result: volume create: vol_1798b5e4249cf3f68a1de902c34238a1: success: please start the volume to access data
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 292.971µs
>[kubeexec] DEBUG 2018/08/16 18:11:40 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: gluster --mode=script volume set vol_1798b5e4249cf3f68a1de902c34238a1 group gluster-block
>Result: volume set: success
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 264.002µs
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 280.105µs
>[kubeexec] DEBUG 2018/08/16 18:11:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: gluster --mode=script volume start vol_1798b5e4249cf3f68a1de902c34238a1
>Result: volume start: vol_1798b5e4249cf3f68a1de902c34238a1: success
>[cmdexec] INFO 2018/08/16 18:11:42 Check Glusterd service status in node dhcp47-183.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/08/16 18:11:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Thu 2018-08-16 17:06:08 UTC; 1h 5min ago
> Process: 475 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 478 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda555347f_a176_11e8_9e7f_005056a51f55.slice/docker-b00ddadd0c9b0c132f31c40db0db4f671339b8389f000e5dfaa10df22cef6311.scope/system.slice/glusterd.service
> ├─ 478 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 937 /usr/sbin/glusterfsd -s 10.70.47.183 --volfile-id heketidbstorage.10.70.47.183.var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.183-var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick.pid -S /var/run/gluster/5ad64fd5618d695a3c7e209b6a93a8a7.socket --brick-name /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_f4cc9aa56c8efb4c60285d771bd5fca9/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick.log --xlator-option *-posix.glusterd-uuid=27d4591d-5f1e-4861-b5e4-15f5da964895 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─4113 /usr/sbin/glusterfsd -s 10.70.47.183 --volfile-id vol_1798b5e4249cf3f68a1de902c34238a1.10.70.47.183.var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586-brick -p /var/run/gluster/vols/vol_1798b5e4249cf3f68a1de902c34238a1/10.70.47.183-var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586-brick.pid -S /var/run/gluster/fd0da672ce59568969affdfa3ba5c588.socket --brick-name /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586-brick.log --xlator-option *-posix.glusterd-uuid=27d4591d-5f1e-4861-b5e4-15f5da964895 --brick-port 49153 --xlator-option vol_1798b5e4249cf3f68a1de902c34238a1-server.listen-port=49153
> └─4134 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/150b377103f5f726f777a10d43c66a88.socket --xlator-option *replicate*.node-uuid=27d4591d-5f1e-4861-b5e4-15f5da964895
>[kubeexec] ERROR 2018/08/16 18:11:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster-block create vol_1798b5e4249cf3f68a1de902c34238a1/blockvol_eff0e476d3ab689b7aa3319ff430279a ha 3 auth disable prealloc full 10.70.47.183,10.70.46.152,10.70.47.54 1GiB --json] on glusterfs-storage-nr58s: Err[command terminated with exit code 255]: Stdout [Connection failed. Please check if gluster-block daemon is operational.
>]: Stderr []
>[kubeexec] ERROR 2018/08/16 18:11:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster-block delete vol_1798b5e4249cf3f68a1de902c34238a1/blockvol_eff0e476d3ab689b7aa3319ff430279a --json] on glusterfs-storage-nr58s: Err[command terminated with exit code 255]: Stdout [Connection failed. Please check if gluster-block daemon is operational.
>]: Stderr []
>[cmdexec] ERROR 2018/08/16 18:11:42 /src/github.com/heketi/heketi/executors/cmdexec/block_volume.go:102: Unable to delete volume blockvol_eff0e476d3ab689b7aa3319ff430279a: Unable to execute command on glusterfs-storage-nr58s:
>[heketi] ERROR 2018/08/16 18:11:42 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:828: Error executing create block volume: Unable to execute command on glusterfs-storage-nr58s:
>[cmdexec] INFO 2018/08/16 18:11:42 Check Glusterd service status in node dhcp47-183.lab.eng.blr.redhat.com
>[kubeexec] DEBUG 2018/08/16 18:11:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Thu 2018-08-16 17:06:08 UTC; 1h 5min ago
> Process: 475 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 478 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda555347f_a176_11e8_9e7f_005056a51f55.slice/docker-b00ddadd0c9b0c132f31c40db0db4f671339b8389f000e5dfaa10df22cef6311.scope/system.slice/glusterd.service
> ├─ 478 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 937 /usr/sbin/glusterfsd -s 10.70.47.183 --volfile-id heketidbstorage.10.70.47.183.var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.183-var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick.pid -S /var/run/gluster/5ad64fd5618d695a3c7e209b6a93a8a7.socket --brick-name /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_f4cc9aa56c8efb4c60285d771bd5fca9/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_f4cc9aa56c8efb4c60285d771bd5fca9-brick.log --xlator-option *-posix.glusterd-uuid=27d4591d-5f1e-4861-b5e4-15f5da964895 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─4113 /usr/sbin/glusterfsd -s 10.70.47.183 --volfile-id vol_1798b5e4249cf3f68a1de902c34238a1.10.70.47.183.var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586-brick -p /var/run/gluster/vols/vol_1798b5e4249cf3f68a1de902c34238a1/10.70.47.183-var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586-brick.pid -S /var/run/gluster/fd0da672ce59568969affdfa3ba5c588.socket --brick-name /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_f89b9b3b7340e500f2c6367273182b28-brick_ee08783e58ff82264d79071bc7892586-brick.log --xlator-option *-posix.glusterd-uuid=27d4591d-5f1e-4861-b5e4-15f5da964895 --brick-port 49153 --xlator-option vol_1798b5e4249cf3f68a1de902c34238a1-server.listen-port=49153
> └─4134 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/150b377103f5f726f777a10d43c66a88.socket --xlator-option *replicate*.node-uuid=27d4591d-5f1e-4861-b5e4-15f5da964895
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 148.766µs
>[kubeexec] ERROR 2018/08/16 18:11:42 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster-block delete vol_1798b5e4249cf3f68a1de902c34238a1/blockvol_eff0e476d3ab689b7aa3319ff430279a --json] on glusterfs-storage-nr58s: Err[command terminated with exit code 255]: Stdout [Connection failed. Please check if gluster-block daemon is operational.
>]: Stderr []
>[cmdexec] ERROR 2018/08/16 18:11:42 /src/github.com/heketi/heketi/executors/cmdexec/block_volume.go:102: Unable to delete volume blockvol_eff0e476d3ab689b7aa3319ff430279a: Unable to execute command on glusterfs-storage-nr58s:
>[heketi] ERROR 2018/08/16 18:11:42 /src/github.com/heketi/heketi/apps/glusterfs/block_volume_entry.go:315: Unable to delete volume: Unable to execute command on glusterfs-storage-nr58s:
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 245.279µs
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 251.713µs
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 153.561µs
>[kubeexec] DEBUG 2018/08/16 18:11:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: gluster --mode=script volume stop vol_1798b5e4249cf3f68a1de902c34238a1 force
>Result: volume stop: vol_1798b5e4249cf3f68a1de902c34238a1: success
>[kubeexec] DEBUG 2018/08/16 18:11:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: gluster --mode=script volume delete vol_1798b5e4249cf3f68a1de902c34238a1
>Result: volume delete: vol_1798b5e4249cf3f68a1de902c34238a1: success
>[heketi] INFO 2018/08/16 18:11:46 Deleting brick 13493ad3ccc3b2f3486d0f0b8e43cda1
>[heketi] INFO 2018/08/16 18:11:46 Deleting brick ee08783e58ff82264d79071bc7892586
>[heketi] INFO 2018/08/16 18:11:46 Deleting brick 2026c337eb46b3dec4391cb9f6f75031
>[kubeexec] DEBUG 2018/08/16 18:11:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: umount /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: umount /var/lib/heketi/mounts/vg_3fecdb9fc84903807eef185ca1057930/brick_2026c337eb46b3dec4391cb9f6f75031
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: umount /var/lib/heketi/mounts/vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: sed -i.save "/brick_ee08783e58ff82264d79071bc7892586/d" /var/lib/heketi/fstab
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:46 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: sed -i.save "/brick_2026c337eb46b3dec4391cb9f6f75031/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 240.398µs
>[kubeexec] DEBUG 2018/08/16 18:11:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: sed -i.save "/brick_13493ad3ccc3b2f3486d0f0b8e43cda1/d" /var/lib/heketi/fstab
>Result:
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 140.889µs
>[kubeexec] DEBUG 2018/08/16 18:11:47 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: lvremove --autobackup=n -f vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586
>Result: Logical volume "brick_ee08783e58ff82264d79071bc7892586" successfully removed
>[kubeexec] DEBUG 2018/08/16 18:11:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: lvremove --autobackup=n -f vg_3fecdb9fc84903807eef185ca1057930/brick_2026c337eb46b3dec4391cb9f6f75031
>Result: Logical volume "brick_2026c337eb46b3dec4391cb9f6f75031" successfully removed
>[kubeexec] DEBUG 2018/08/16 18:11:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: lvremove --autobackup=n -f vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result: Logical volume "brick_13493ad3ccc3b2f3486d0f0b8e43cda1" successfully removed
>[kubeexec] DEBUG 2018/08/16 18:11:48 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: lvs --noheadings --options=thin_count vg_f89b9b3b7340e500f2c6367273182b28/tp_ee08783e58ff82264d79071bc7892586
>Result: 0
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 290.312µs
>[kubeexec] DEBUG 2018/08/16 18:11:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: lvs --noheadings --options=thin_count vg_3fecdb9fc84903807eef185ca1057930/tp_2f2347e81557f8d4e1de9899a545fb13
>Result: 0
>[kubeexec] DEBUG 2018/08/16 18:11:49 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: lvs --noheadings --options=thin_count vg_af2c81ded89e5ffa84b759f5fc717bc7/tp_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result: 0
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 158.084µs
>[kubeexec] DEBUG 2018/08/16 18:11:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: lvremove --autobackup=n -f vg_f89b9b3b7340e500f2c6367273182b28/tp_ee08783e58ff82264d79071bc7892586
>Result: Logical volume "tp_ee08783e58ff82264d79071bc7892586" successfully removed
>[kubeexec] DEBUG 2018/08/16 18:11:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: lvremove --autobackup=n -f vg_3fecdb9fc84903807eef185ca1057930/tp_2f2347e81557f8d4e1de9899a545fb13
>Result: Logical volume "tp_2f2347e81557f8d4e1de9899a545fb13" successfully removed
>[kubeexec] DEBUG 2018/08/16 18:11:50 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: lvremove --autobackup=n -f vg_af2c81ded89e5ffa84b759f5fc717bc7/tp_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result: Logical volume "tp_13493ad3ccc3b2f3486d0f0b8e43cda1" successfully removed
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 200 OK in 213.613µs
>[kubeexec] DEBUG 2018/08/16 18:11:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-183.lab.eng.blr.redhat.com Pod: glusterfs-storage-nr58s Command: rmdir /var/lib/heketi/mounts/vg_f89b9b3b7340e500f2c6367273182b28/brick_ee08783e58ff82264d79071bc7892586
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp47-54.lab.eng.blr.redhat.com Pod: glusterfs-storage-strmr Command: rmdir /var/lib/heketi/mounts/vg_3fecdb9fc84903807eef185ca1057930/brick_2026c337eb46b3dec4391cb9f6f75031
>Result:
>[kubeexec] DEBUG 2018/08/16 18:11:51 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: dhcp46-152.lab.eng.blr.redhat.com Pod: glusterfs-storage-vsh6t Command: rmdir /var/lib/heketi/mounts/vg_af2c81ded89e5ffa84b759f5fc717bc7/brick_13493ad3ccc3b2f3486d0f0b8e43cda1
>Result:
>[heketi] ERROR 2018/08/16 18:11:51 /src/github.com/heketi/heketi/apps/glusterfs/operations_manage.go:113: Create Block Volume Failed: Unable to execute command on glusterfs-storage-nr58s:
>[asynchttp] INFO 2018/08/16 18:11:51 asynchttp.go:292: Completed job d51034cb5ff35e9d5f58a1e8ce8d6444 in 14.939129884s
>[negroni] Started GET /queue/d51034cb5ff35e9d5f58a1e8ce8d6444
>[negroni] Completed 500 Internal Server Error in 166.013µs
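The failure signature in this log is the repeated "Connection failed. Please check if gluster-block daemon is operational." from gluster-block, after which heketi rolls the operation back and the request ends in a 500. A minimal triage sketch for pulling that signature out of a saved copy of a heketi log is below; the file `/tmp/hk_sample.log` and its two inlined lines are a hypothetical stand-in for the full hk.logs attachment, not part of the original.

```shell
# Hypothetical sample standing in for the full hk.logs attachment.
cat > /tmp/hk_sample.log <<'EOF'
[kubeexec] ERROR 2018/08/16 18:11:42 kubeexec.go:242: Failed to run command [gluster-block create ...]: Stdout [Connection failed. Please check if gluster-block daemon is operational.]
[heketi] ERROR 2018/08/16 18:11:51 operations_manage.go:113: Create Block Volume Failed
EOF

# Count occurrences of the gluster-block connection failure.
grep -c 'gluster-block daemon is operational' /tmp/hk_sample.log

# List all ERROR-level entries from heketi's executors and core.
grep -E '^\[(kubeexec|cmdexec|heketi)\] ERROR' /tmp/hk_sample.log
```

The same two greps, pointed at the real log, separate the root-cause error (the gluster-blockd connection failure) from the follow-on cleanup errors.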