Bug 1477040 - ha count from CLI is not treated correctly in heketi block API
Status: ON_QA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: heketi
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: CNS 3.6
Assigned To: Michael Adam
QA Contact: krishnaram Karthick
Blocks: 1445448
Reported: 2017-08-01 02:17 EDT by Humble Chirammal
Modified: 2017-08-07 14:45 EDT (History)
CC: 5 users

Fixed In Version: heketi-5.0.0-7 rhgs-volmanager-docker-5.0.0-9
Type: Bug

Attachments: None
Description (Humble Chirammal, 2017-08-01 02:17:58 EDT):
Description of problem:
The ha count supplied on the heketi-cli command line is not passed through to the heketi block API: a block volume requested with ha=2 is created with Hacount 4 (one per available host), and the heketi log shows gluster-block being invoked with "ha 4".
[root@master ~]# heketi-cli blockvolume create --size=1 ha=2
Name: blockvol_a6874e1c5d8467c2733b1a02b491e61e
Size: 1
Volume Id: a6874e1c5d8467c2733b1a02b491e61e
Cluster Id: c35f18bf77b5c8aaa8576d2c95e0ba8c
Hosts: [192.168.35.3 192.168.35.4 192.168.35.5 192.168.35.6]
IQN: iqn.2016-12.org.gluster-block:5edac56e-d4ab-471c-8e3b-42f3b110c01a
LUN: 0
Hacount: 4
Username: 
Password: 
Block Hosting Volume: 96d8648ebd2ff7dcb87be3a2587c6246


[root@master ~]# oc logs heketi-1-22x5g
Heketi 5.0.0
[heketi] INFO 2017/07/27 06:53:44 Loaded kubernetes executor
[heketi] INFO 2017/07/27 06:53:44 Block: Auto Create Block Hosting Volume set to true
[heketi] INFO 2017/07/27 06:53:44 Block: New Block Hosting Volume size 500 GB
[heketi] INFO 2017/07/27 06:53:44 Loaded simple allocator
[heketi] INFO 2017/07/27 06:53:44 GlusterFS Application Loaded
Listening on port 8080
............
[negroni] Started GET /queue/3f9ac0b3ecf8ea8c40715222a04b50bc
[negroni] Completed 200 OK in 106.279µs
[negroni] Started GET /queue/3f9ac0b3ecf8ea8c40715222a04b50bc
[negroni] Completed 200 OK in 66.721µs
[negroni] Started GET /queue/3f9ac0b3ecf8ea8c40715222a04b50bc
[negroni] Completed 200 OK in 60.938µs
[negroni] Started GET /queue/3f9ac0b3ecf8ea8c40715222a04b50bc
[negroni] Completed 200 OK in 73.064µs
[negroni] Started GET /queue/3f9ac0b3ecf8ea8c40715222a04b50bc
[negroni] Completed 200 OK in 65.684µs
[kubeexec] DEBUG 2017/08/01 06:08:04 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:250: Host: 192.168.35.3 Pod: glusterfs-6qz73 Command: gluster-block create vol_96d8648ebd2ff7dcb87be3a2587c6246/blockvol_a6874e1c5d8467c2733b1a02b491e61e  ha 4 auth disable  192.168.35.3,192.168.35.4,192.168.35.5,192.168.35.6 1G --json
Result: { "IQN": "iqn.2016-12.org.gluster-block:5edac56e-d4ab-471c-8e3b-42f3b110c01a", "PORTAL(S)": [ "192.168.35.3:3260", "192.168.35.4:3260", "192.168.35.5:3260", "192.168.35.6:3260" ], "RESULT": "SUCCESS" }
[heketi] INFO 2017/08/01 06:08:04 Created block volume a6874e1c5d8467c2733b1a02b491e61e
[asynchttp] INFO 2017/08/01 06:08:04 asynchttp.go:129: Completed job 3f9ac0b3ecf8ea8c40715222a04b50bc in 7.540742687s
[negroni] Started GET /queue/3f9ac0b3ecf8ea8c40715222a04b50bc
[negroni] Completed 303 See Other in 89.578µs
[negroni] Started GET /blockvolumes/a6874e1c5d8467c2733b1a02b491e61e
[negroni] Completed 200 OK in 4.80209ms
[root@master ~]# 
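The log above shows heketi passing "ha 4" (the host count) to gluster-block even though ha=2 was requested. A minimal sketch of the server-side defaulting one would expect instead; the names (BlockVolumeRequest, effectiveHa) are illustrative stand-ins, not heketi's actual types:

```go
package main

import "fmt"

// BlockVolumeRequest is a hypothetical stand-in for the block volume
// create request heketi receives over its REST API.
type BlockVolumeRequest struct {
	Size int
	Ha   int // 0 means "not specified by the caller"
}

// effectiveHa returns the ha count to pass to gluster-block: the
// caller's value when one was given and is usable, otherwise the
// number of available hosts (the behavior seen in the log).
func effectiveHa(req BlockVolumeRequest, numHosts int) int {
	if req.Ha > 0 && req.Ha <= numHosts {
		return req.Ha
	}
	return numHosts
}

func main() {
	// Four hosts are available, as in the reported cluster.
	fmt.Println(effectiveHa(BlockVolumeRequest{Size: 1, Ha: 2}, 4)) // requested ha honored: 2
	fmt.Println(effectiveHa(BlockVolumeRequest{Size: 1}, 4))        // no ha given: 4
}
```

With logic like this, the create shown above would have produced Hacount 2 rather than 4.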

[root@master ~]# heketi-cli --version
heketi-cli 5.0.0


[root@master ~]# oc describe pod heketi-1-22x5g|grep -i image
    Image:		rhgs3/rhgs-volmanager-rhel7:3.3.0-8
    Image ID:		docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-volmanager-rhel7@sha256:bd486e30d983a7fcb8b6971fdebafb1131efc6732644002f3ebdadd11c8dd3de


[root@master ~]# rpm -qa |grep heketi
heketi-5.0.0-6.el7rhgs.x86_64
heketi-client-5.0.0-6.el7rhgs.x86_64
[root@master ~]# 
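Note that the create command was given "ha=2", not "--ha=2". With Go's standard flag package (heketi-cli uses a cobra-style parser, so this is only an illustration of the general behavior, with illustrative names), an argument without leading dashes stops option parsing and is left as a positional argument, so the ha option silently keeps its default:

```go
package main

import (
	"flag"
	"fmt"
)

// parseCreateArgs mimics parsing a "blockvolume create" command line;
// the function and the default ha value of 3 are illustrative only.
func parseCreateArgs(args []string) (size, ha int, rest []string) {
	fs := flag.NewFlagSet("create", flag.ContinueOnError)
	fs.IntVar(&size, "size", 0, "volume size in GiB")
	fs.IntVar(&ha, "ha", 3, "ha count")
	if err := fs.Parse(args); err != nil {
		panic(err)
	}
	return size, ha, fs.Args()
}

func main() {
	// "ha=2" lacks leading dashes, so the parser treats it as a
	// positional argument and the ha option keeps its default.
	size, ha, rest := parseCreateArgs([]string{"--size=1", "ha=2"})
	fmt.Println(size, ha, rest) // 1 3 [ha=2]
}
```

If the server then falls back to "one replica per host" when no ha count arrives, that would match the Hacount 4 observed above; either way, the requested value of 2 never reaches gluster-block.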



Version-Release number of selected component (if applicable):
heketi-5.0.0-6.el7rhgs, heketi-client-5.0.0-6.el7rhgs (rhgs3/rhgs-volmanager-rhel7:3.3.0-8 image)


How reproducible:


Steps to Reproduce:
1. Create a block volume with a non-default ha count: heketi-cli blockvolume create --size=1 ha=2
2. Inspect the heketi-cli output and the heketi pod log.
3. Note the Hacount reported for the volume and the ha value passed to gluster-block.

Actual results:
The volume is created with Hacount 4, and gluster-block is invoked with "ha 4", ignoring the requested ha count of 2.

Expected results:
The volume is created with the ha count requested on the command line (Hacount 2).

Additional info:
