Bug 1471658

Summary: [HC] Hosted engine deployment should enable gfapi access for cluster
Product: [oVirt] ovirt-hosted-engine-setup
Reporter: Sahina Bose <sabose>
Component: Plugins.Gluster
Assignee: Denis Chaplygin <dchaplyg>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: high
Version: ---
CC: bugs, cshao, dchaplyg, dfediuck, knarra, sabose, stirabos, ylavi
Target Milestone: ovirt-4.1.7
Keywords: FutureFeature
Target Release: 2.1.3
Flags: rule-engine: ovirt-4.1?
rule-engine: planning_ack?
rule-engine: devel_ack+
rule-engine: testing_ack+
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Enabling gfapi during HE installation
Reason: For the HC deployment, we want gfapi access to be enabled for the "Default" cluster during HE deployment.
Result: You can use an additional config file with OVEHOSTED_ENGINE/enableLibgfapi=bool:True to enable libgfapi during HE setup.
Story Points: ---
Clone Of:
: 1488991 (view as bug list)
Environment:
Last Closed: 2017-11-13 12:27:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1477053    
Bug Blocks: 1493544    

Description Sahina Bose 2017-07-17 08:10:10 UTC
Description of problem:

In 4.1, gfapi access is disabled by default. It is a cluster-level setting, enabled via the AdditionalFeature support at the cluster level.

For the HC deployment, we want the gfapi access to be enabled for the "Default" cluster during HE deployment. 


Version-Release number of selected component (if applicable):
4.1

How reproducible:
NA


Additional info:
Could we have a configuration option that sets this value for the cluster in the database during HE deployment? If so, the option could be passed in the he-answers file from gdeploy.

Comment 1 Sandro Bonazzola 2017-07-17 12:39:58 UTC
Moving to gluster team

Comment 4 Denis Chaplygin 2017-09-12 15:12:57 UTC
You could use additional config file with:

OVEHOSTED_ENGINE/enableLibgfapi=bool:True

to enable libgfapi during HE setup
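For reference, the answer-file entries follow the `SECTION/key=type:value` shape visible throughout he-common.conf. Below is an illustrative sketch (not the actual otopi implementation, and the helper name `parse_answer_line` is hypothetical) of how such a typed entry decomposes:

```python
# Illustrative sketch: split an otopi-style answer-file line such as
# "OVEHOSTED_ENGINE/enableLibgfapi=bool:True" into section, key and
# a value converted according to its declared type tag.
def parse_answer_line(line):
    path, _, typed_value = line.partition("=")
    section, _, key = path.partition("/")
    vtype, _, raw = typed_value.partition(":")
    if vtype == "bool":
        value = raw == "True"
    elif vtype == "int":
        value = int(raw)
    elif vtype == "none":
        value = None
    else:  # "str" and any unrecognized tag stay as plain strings
        value = raw
    return section, key, value
```

So the line from comment 4 parses to section `OVEHOSTED_ENGINE`, key `enableLibgfapi`, boolean value `True`.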

Comment 5 Sahina Bose 2017-10-11 08:30:44 UTC
Thanks. We will not be enabling this by default in HC deployment though, as there are open BZs regarding additional host support with gfapi.

Comment 6 RamaKasturi 2017-11-02 16:46:40 UTC
Hi Sahina / Denis,

    From this bug I understand that a fix needs to be part of the gdeploy cockpit plugin to incorporate the value given in comment 4, enabling gfapi access for the default cluster during deployment.

    From comment 5 I understand that we are not enabling this by default in HC deployment, and for RHHI 1.1 this is not in scope for testing.

    Can you please confirm what needs to be verified here? Do we need to verify that this option can be passed to he-common.conf before doing HE deployment and that HE deployment then passes? If yes, can we have the summary of the bug changed?

Thanks
kasturi

Comment 7 Denis Chaplygin 2017-11-03 09:23:34 UTC
My patch provides a mechanism to enable libgfapi during HE deployment. You don't have to use it at all if you would like to keep libgfapi disabled.

As you mentioned, it must be supported by gdeploy too.

Comment 8 RamaKasturi 2017-11-03 09:31:32 UTC
(In reply to Denis Chaplygin from comment #7)
> My patch provides mechanism to enable libgfapi during HE deployment. You
> don't have to use it at all if you would like to have libgfapi disabled. 
> 
> As you mentioned, it must be supported by gdeploy too.

Denis, from QE side what is the verification you are looking for this bug ?

Comment 9 Yaniv Lavi 2017-11-05 11:10:09 UTC
(In reply to RamaKasturi from comment #8)
> (In reply to Denis Chaplygin from comment #7)
> > My patch provides mechanism to enable libgfapi during HE deployment. You
> > don't have to use it at all if you would like to have libgfapi disabled. 
> > 
> > As you mentioned, it must be supported by gdeploy too.
> 
> Denis, from QE side what is the verification you are looking for this bug ?

The test should check that libgfapi can be enabled by sending the parameter ENABLE_LIBGFAPI. The default should still be to have it disabled.

Comment 10 RamaKasturi 2017-11-06 10:23:29 UTC
Verified; works fine with build ovirt-hosted-engine-setup-2.1.4-1.el7ev.noarch.

Edited the file '/usr/share/cockpit/ovirt-dashboard/gdeploy-templates/he-common.conf' to set OVEHOSTED_ENGINE/enableLibgfapi=bool:True and started the HostedEngine deployment after the gluster deployment was done.

When deployed with the above option in he-common.conf, gfapi is enabled on the Default cluster during HostedEngine deployment.

[root@hostedenginesm1 ~]# su - postgres
-bash-4.2$ psql -d engine
psql (9.2.23)
Type "help" for help.

engine=# select * from vdc_options where option_name='LibgfApiSupported' and version='4.1';
 option_id |    option_name    | option_value | version 
-----------+-------------------+--------------+---------
        91 | LibgfApiSupported | true         | 4.1
(1 row)


[root@hostedenginesm1 ~]# engine-config -g LibgfApiSupported
LibgfApiSupported: false version: 3.6
LibgfApiSupported: false version: 4.0
LibgfApiSupported: true version: 4.1

Added the additional hosts and created a VM. The disks of the newly created VMs are accessed via gfapi.

[root@rhsqa-grafton1-nic2 ~]# ps aux | grep applinuxvm1
qemu     14628 29.2  0.8 5072968 2273672 ?     Sl   15:37   1:44 /usr/libexec/qemu-kvm -name guest=applinuxvm1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-applinuxvm1/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX -m size=4194304k,slots=16,maxmem=16777216k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=4096 -uuid ceca0ca5-0d55-48a3-847e-02a5af386a9a -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=7.4-7.0.el7,serial=00000000-0000-0000-0000-0CC47A6F3388,uuid=ceca0ca5-0d55-48a3-847e-02a5af386a9a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-applinuxvm1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2017-11-06T10:07:20,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=gluster://10.70.36.79/vmstore/c9268709-c4e4-4690-b300-c8c3782419e2/images/41c500a9-9460-4431-b493-9c09f2d11935/4c541743-cb8b-4b2f-af70-299ed0f6100e,file.debug=4,format=raw,if=none,id=drive-scsi0-0-0-0,serial=41c500a9-9460-4431-b493-9c09f2d11935,cache=none,werror=stop,rerror=stop,aio=threads -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,fd=32,id=hostnet0,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3,bootindex=1 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/ceca0ca5-0d55-48a3-847e-02a5af386a9a.com.redhat.rhevm.vdsm,server,nowait -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/ceca0ca5-0d55-48a3-847e-02a5af386a9a.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.70.45.1,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7 -msg timestamp=on
root     17125  0.0  0.0 112660   972 pts/0    S+   15:43   0:00 grep --color=auto applinuxvm1
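The gfapi access is visible in the qemu-kvm command line above: the `-drive file=` value is a `gluster://` URL rather than a FUSE mount path. A small hedged helper (hypothetical, not part of any oVirt tool) that captures this check:

```python
# Hypothetical helper: classify how a VM disk is accessed, based on the
# -drive file= value from the qemu-kvm command line. A gluster:// URL
# means libgfapi; a filesystem path means access through a FUSE mount.
def disk_access_mode(drive_file):
    if drive_file.startswith("gluster://"):
        return "gfapi"
    return "fuse"
```

In the output above, `file=gluster://10.70.36.79/vmstore/...` confirms the disk is served through libgfapi.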

contents of /usr/share/cockpit/ovirt-dashboard/gdeploy-templates/he-common.conf file:
===============================================================
[root@rhsqa-grafton1-nic2 ~]# cat /usr/share/cockpit/ovirt-dashboard/gdeploy-templates/he-common.conf
[environment:default]
OVEHOSTED_NETWORK/bridgeName=str:ovirtmgmt
OVEHOSTED_NETWORK/firewallManager=str:
OVEHOSTED_ENGINE/insecureSSL=none:None
OVEHOSTED_ENGINE/clusterName=str:Default
OVEHOSTED_ENGINE/enableLibgfapi=bool:True
OVEHOSTED_STORAGE/storageDatacenterName=str:hosted_datacenter
OVEHOSTED_STORAGE/domainType=str:glusterfs
OVEHOSTED_STORAGE/glusterBrick=none:None
OVEHOSTED_STORAGE/LunID=none:None
OVEHOSTED_STORAGE/iSCSIPortalIPAddress=none:None
OVEHOSTED_STORAGE/iSCSITargetName=none:None
OVEHOSTED_STORAGE/glusterProvisionedShareName=str:hosted_engine_glusterfs
OVEHOSTED_STORAGE/iSCSIPortalPort=none:None
OVEHOSTED_STORAGE/spUUID=str:00000000-0000-0000-0000-000000000000
OVEHOSTED_STORAGE/storageDomainName=str:hosted_storage
OVEHOSTED_STORAGE/glusterProvisioningEnabled=bool:False
OVEHOSTED_STORAGE/iSCSIPortal=none:None
OVEHOSTED_STORAGE/storageType=none:None
OVEHOSTED_STORAGE/vgUUID=none:None
OVEHOSTED_STORAGE/iSCSIPortalUser=none:None
OVEHOSTED_VDSM/consoleType=str:vnc
OVEHOSTED_VM/vmCDRom=none:None
OVEHOSTED_VM/automateVMShutdown=bool:True
OVEHOSTED_NETWORK/firewallManager=str:

Moving this bug to verified state. I will log another bug against the ovirt-cockpit gdeploy plugin and add it as a dependency of the RHHI bug, since the plugin must support this option for the complete flow to be functional.