Bug 1124999 - [RHSC] not clear how to run software update of cluster with console
Summary: [RHSC] not clear how to run software update of cluster with console
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Sahina Bose
QA Contact: RHS-C QE
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2014-07-30 19:39 UTC by Martin Bukatovic
Modified: 2018-01-29 15:11 UTC
6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-29 15:11:11 UTC
shtripat: needinfo-


Attachments
rhsc-screenshot (132.11 KB, image/png)
2014-07-30 19:39 UTC, Martin Bukatovic



Internal Links: 850441 1189285

Description Martin Bukatovic 2014-07-30 19:39:12 UTC
Created attachment 922712 [details]
rhsc-screenshot

Description of problem
======================

RHSC doesn't help the gluster admin with software updates in any way.

Although the Console Administration Guide notes the possibility of software
updates and gives a few hints about how it might work, the update does not
work, and the guide is missing a clear description of the software update
workflow.

Version-Release number of selected component (if applicable)
============================================================

On the RHSC server:

~~~
# rpm -qa | grep rhsc
rhsc-webadmin-portal-3.0.0-0.12.el6rhs.noarch
rhsc-beaker-tasks-rhs-tests-rhsc-rhsc-setup-0.0.1-2.noarch
rhsc-setup-plugin-ovirt-engine-common-3.0.0-0.12.el6rhs.noarch
rhsc-branding-rhs-3.0.0-1.0.el6rhs.noarch
rhsc-3.0.0-0.12.el6rhs.noarch
rhsc-monitoring-uiplugin-0.1.1-1.el6rhs.noarch
rhsc-tools-3.0.0-0.12.el6rhs.noarch
rhsc-dbscripts-3.0.0-0.12.el6rhs.noarch
rhsc-doc-3.0.0-1.el6eng.noarch
rhsc-lib-3.0.0-0.12.el6rhs.noarch
rhsc-backend-3.0.0-0.12.el6rhs.noarch
rhsc-sdk-python-3.0.0.0-0.2.el6rhs.noarch
rhsc-setup-plugin-ovirt-engine-3.0.0-0.12.el6rhs.noarch
redhat-access-plugin-rhsc-3.0.0-1.el6rhs.noarch
rhsc-cli-3.0.0.0-0.2.el6rhs.noarch
rhsc-setup-base-3.0.0-0.12.el6rhs.noarch
rhsc-setup-plugins-3.0.0-0.2.el6rhs.noarch
rhsc-setup-3.0.0-0.12.el6rhs.noarch
rhsc-restapi-3.0.0-0.12.el6rhs.noarch
rhsc-log-collector-3.0.0-4.0.el6rhs.noarch
~~~

Console Administration Guide:
https://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Storage/3/html-single/Console_Administration_Guide/index.html

How reproducible
================

100%

Steps to Reproduce
==================

Prerequisites (initial state):

 * Cluster contains 4 RHS nodes, each hosting a brick; all nodes are up
 * A Gluster volume is up and running
 * RHSC runs outside of the trusted storage pool, on a dedicated server
 * Updates for gluster are available in the RHS channel on the RHS nodes
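
The last prerequisite can be verified from a node's shell: `yum check-update`
signals its result through the exit code (0 = no updates, 100 = updates
available, 1 = error). A minimal sketch of interpreting that code (the
`classify_update_status` helper is hypothetical, added here only for
illustration):

~~~
# Interpret `yum check-update` exit codes:
#   0 = no updates pending, 100 = updates available, other = error
classify_update_status() {
    case "$1" in
        0)   echo "no gluster updates pending" ;;
        100) echo "gluster updates available" ;;
        *)   echo "update check failed" ;;
    esac
}

# On a real RHS node one would run:
#   yum -q check-update 'glusterfs*'; classify_update_status $?
classify_update_status 100   # → gluster updates available
~~~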

Use case one
------------

1. Move one node into maintenance mode via RHSC
2. In the RHSC web interface, open the General tab on the Details pane and
   click the "If you wish to upgrade or reinstall it click here" link.
3. Click OK in the "Install Host" window to start the update

Actual results of use case one
------------------------------

When a node is in maintenance mode, the following message is shown in
the General tab on the Details pane:

~~~
Action Items
Host is in maintenance mode, you can Activate it by pressing the Activate
button. If you wish to upgrade or reinstall it click here.
~~~

(see screenshot)

But it does nothing: after a while, the node is shown as *active* again
without the update having been performed. The yum log shows nothing.

Expected results of use case one
--------------------------------

The update is performed, or an error is shown informing the user why the
update failed.

Use case two
------------

1. Move one node into maintenance mode
2. Log on to the node via SSH and run 'yum update'

Actual results of use case two
------------------------------

The yum update fails because maintenance mode doesn't trigger any change on the
node. This means that all gluster daemons are still running, which prevents the
update from completing (this is expected behaviour).
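
For the record, a manual update on such a node would roughly require stopping
the gluster daemons first. The sketch below only previews the commands via a
DRY_RUN guard (the default here); the service name and the `pkill` pattern are
assumptions about the RHS 3.0 environment, not a documented procedure:

~~~
# Preview (DRY_RUN=1, the default) or execute the manual update
# steps on a node already placed in maintenance mode.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run service glusterd stop   # stop the management daemon
run pkill gluster           # stop remaining gluster processes (bricks, nfs, shd)
run yum -y update           # apply pending errata packages
run service glusterd start  # bring the node back up
~~~

Set DRY_RUN=0 only on a node that is really in maintenance mode; note that
`pkill gluster` matches any process whose name contains "gluster", which is a
deliberately blunt simplification.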

Expected results of use case two
--------------------------------

Not sure, but the RHSC Admin Guide should include a better description of
maintenance mode than this:

> To perform certain actions, you may need to move hosts into maintenance mode.

Additional info
===============

Console Admin Guide[1] states that:

> Maintaining the cluster, including performing updates and monitoring usage
> and performance to keep the cluster responsive to changing needs and loads. 

From this, it's not clear whether the text means software updates or updating
the cluster configuration.

But section "4.5.1. Viewing General Host Information" states:

> The General tab on the Details pane provides information on individual hosts,
> including hardware and software versions, and available updates. 

Based on this statement, it's reasonable to assume that customers would expect
RHSC to provide some way to run software updates.

Comment 3 Prasanth 2014-07-31 05:56:45 UTC
AFAIK, the option to upgrade the RHS cluster from one version to another is still not available in RHSC, and the corresponding RFE BZ is still in NEW state. See bug [1] for more details.

If the doc says otherwise or contains any confusing statements, it has to be fixed until the upgrade feature is actually implemented in the UI.

Anyway, we will wait for the dev team to comment on this or provide any further updates.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=850441


-Prasanth

Comment 4 Martin Bukatovic 2015-01-30 10:23:13 UTC
Since BZ 850441 was just closed, I would like to make clear that this BZ is about software updates (imagine running `yum update` to get new packages released via async or z-stream errata) and not about upgrading gluster to a newer, incompatible version (e.g. from RHSS 2.1 to 3.0).

Comment 6 Sahina Bose 2018-01-29 15:11:11 UTC
Thank you for your report. This bug is filed against a component for which no further new development is being undertaken.

