Description of problem:
Setting the 'virt' option group on a distribute volume fails, since some of the options in the group are specific to replicate volume types.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a distribute volume and select 'Optimize for Virt Store'

Actual results:
Setting the virt option group fails.
Do we need to create a separate profile for distribute volumes and virt store?
(In reply to Sahina Bose from comment #1)
> Do we need to create a separate profile for distribute and virt store.

That is one option.

But I just tried setting group virt on a plain distribute volume, and group virt has the following afr-specific options:

cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
cluster.choose-local=off

It turns out this error is thrown only while setting cluster.shd-max-threads. So in that sense, glusterd's behavior is quite inconsistent.

We could create a separate profile for distribute-only volumes, but in that case care must be taken to set the actual group virt options whenever the volume is converted to a replicated configuration.

-Krutika
(In reply to Krutika Dhananjay from comment #2)
> We could create a separate profile for distribute-only volume but in that
> case care must be taken to set the actual group virt options whenever the
> volume is converted to replicated configuration.

Yes, that could be documented in the volume conversion procedure.
Sahina,

So if there are multiple virt profiles - say, virt profile 1 for replicate volumes and virt profile 2 for distribute volumes - then the engine may also need a code change to understand which virt profile needs to be invoked based on the volume type.

There are a couple of places where the RHV Manager UI invokes the virt profile:
1. The volume creation dialog, which has an 'Optimize for Virt Store' check box
2. Selecting the volume and choosing 'Optimize for Virt Store'

Have you also thought about these changes?
(In reply to SATHEESARAN from comment #4)
> So if there are multiple virt profiles, say virt profile1 for replicate
> volumes and virt profile2 for distribute volumes - then engine may also need
> code change to understand which virt profile needs to be invoked based on
> the volume type.

Yes - the engine code will need to be changed as well if we agree to create a separate profile. The other option is to ensure the option does not error out when set on a distribute volume.
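If a second profile is introduced, the engine-side change could be as small as a lookup keyed on volume type. A minimal sketch of that decision, assuming a 'distributed-virt' profile name for distribute volumes; the function name and type strings are illustrative, not the actual oVirt engine API:

```python
# Hypothetical sketch: choose which gluster option group to apply when the
# user picks 'Optimize for Virt Store', based on the volume type.

REPLICATED_TYPES = {"replicate", "distributed-replicate"}

def virt_profile_for(volume_type: str) -> str:
    """Return the option-group name to use for 'Optimize for Virt Store'."""
    if volume_type.lower() in REPLICATED_TYPES:
        return "virt"
    return "distributed-virt"

print(virt_profile_for("distribute"))             # distributed-virt
print(virt_profile_for("distributed-replicate"))  # virt
```

Both UI entry points (volume creation dialog and the volume action) could then share this one lookup.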
(In reply to Sahina Bose from comment #5)
> Yes - the engine code will need to be changed as well if we have agreed to
> create a separate profile. The other option is to ensure the option does not
> error out when set on a distribute volume.

Setting needinfo on Atin.

Atin,

The behavior w.r.t. executing volume-set for an option whose translator is not in the graph is inconsistent. For example, setting some of the afr-specific options on a plain distribute volume succeeds, whereas one such option fails if the volume is not replicated. What is the expected behavior?

If there is no harm as such in letting such a volume-set operation succeed, then maybe we can ask the afr folks to fix the issue with the lone option, cluster.shd-max-threads, which currently fails.
I don't think we can afford to ignore volume-set failures when an option is set on the wrong volume type. The reason the cluster.shd-max-threads option fails here is an added validation that checks whether the option is being set on a replicate volume:

{.key = "cluster.shd-max-threads",
 .voltype = "cluster/replicate",
 .op_version = GD_OP_VERSION_3_7_12,
 .flags = VOLOPT_FLAG_CLIENT_OPT,
 .validate_fn = validate_replica},

The other replica options are missing that validation, which allows them to go through. I believe this additional validation was added to address bugs raised by QE/GSS to block such operations for incompatible volume types, so we can't afford to revert it.

IMO, having a separate group profile is the way forward to avoid more complications.
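The inconsistency described above comes down to which entries in the option table carry a validate_fn. A toy Python model of that dispatch - not glusterd code, just the keys from comment #2 with only cluster.shd-max-threads carrying a replica-only validator:

```python
# Toy model of glusterd volume-set validation: an option either has a
# validator or it does not. In this reduced table, only
# cluster.shd-max-threads carries validate_replica, so it alone rejects
# a plain distribute volume; the other afr-specific keys sail through.

def validate_replica(volume_type: str) -> bool:
    return "replicate" in volume_type

# Reduced option table: key -> optional validator (keys from comment #2).
OPTION_TABLE = {
    "cluster.eager-lock": None,
    "cluster.quorum-type": None,
    "cluster.server-quorum-type": None,
    "cluster.data-self-heal-algorithm": None,
    "cluster.locking-scheme": None,
    "cluster.shd-max-threads": validate_replica,
    "cluster.shd-wait-qlength": None,
    "cluster.choose-local": None,
}

def volume_set(volume_type: str, key: str) -> bool:
    """Return True if the set succeeds, False if validation rejects it."""
    validator = OPTION_TABLE[key]
    return validator is None or validator(volume_type)

failures = [k for k in OPTION_TABLE if not volume_set("distribute", k)]
print(failures)  # ['cluster.shd-max-threads']
```

Making the behavior consistent would mean either giving every afr-specific key a validator (stricter, and an argument for a separate profile) or removing the lone one (which would revert the earlier QE/GSS-driven fix).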
Also, we need to make sure granular-entry-heal - which is set during cockpit-based installation - is not set on Dalton volumes.
(In reply to Krutika Dhananjay from comment #8)
> Also we need to make sure granular-entry-heal - which is set during
> cockpit-based installation - is not set on Dalton volumes.

We can make sure that the distribute volume will not have granular-entry-heal turned on. I have raised a bug to support single-brick creation with gluster-ansible - BZ https://bugzilla.redhat.com/show_bug.cgi?id=1653575 - which will also make sure that the granular-entry-heal option is not set on the distributed volume.
Additional information: the new virt profile for distributed volumes is now available under the name 'distributed-virt'.

The following command is used to optimize a distribute volume for the virt store use case:

# gluster volume set <vol> group distributed-virt

This is fixed in RHGS 3.4 update 3 (downstream).
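For context, glusterd reads option groups from plain key=value files under /var/lib/glusterd/groups/, so a distribute-only profile is just such a file with the replicate-specific keys left out. A hedged sketch of what the distributed-virt file could contain - the exact contents shipped in RHGS 3.4 update 3 may differ; only the exclusions follow from the afr-specific list in comment #2:

```ini
# /var/lib/glusterd/groups/distributed-virt (illustrative contents only).
# The afr-specific keys from the 'virt' group (cluster.eager-lock,
# cluster.quorum-type, cluster.shd-max-threads, etc.) are deliberately
# omitted, since they have no effect on a plain distribute volume.
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
features.shard=on
user.cifs=off
```

As noted in comment #2, converting such a volume to a replicated configuration later means applying the full 'virt' group to pick up the omitted afr options.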
Verified with RHV 4.3.5.3:

1. Created the distribute volume from the RHV Manager UI
2. Optimized this volume for virt store

All options are set on the volume as expected.

Note: An error seen while enabling 'granular-entry-heal' on this volume is tracked as part of bug https://bugzilla.redhat.com/show_bug.cgi?id=1673277
This bugzilla is included in oVirt 4.3.5 release, published on July 30th 2019. Since the problem described in this bug report should be resolved in oVirt 4.3.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.