Bug 1529509 - Trying to upgrade a host via the API fails with fault - 'no upgrades available'
Summary: Trying to upgrade a host via the API fails with fault - 'no upgrades available'
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine-sdk-python
Classification: oVirt
Component: General
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.2.3
Assignee: Ondra Machacek
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-12-28 12:55 UTC by Yaniv Kaul
Modified: 2018-04-18 11:50 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-27 11:06:36 UTC
oVirt Team: Infra
Embargoed:
rule-engine: ovirt-4.2+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 87324 0 master MERGED Add upgrade host example 2018-02-09 16:32:44 UTC
oVirt gerrit 87412 0 sdk_4.2 MERGED Add upgrade host example 2018-02-09 16:34:51 UTC

Description Yaniv Kaul 2017-12-28 12:55:15 UTC
Description of problem:
Trying to upgrade a host via the API fails. I think the problem is that, unlike in the UI, you can try to upgrade without checking for upgrades first.
The error:
2017-12-28 07:23:50,719-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-8) [6f7a1d27-5bf3-4a3b-bef9-4e7ba46db383] method: get, params: [82735e8c-7015-496f-883c-5ff78aa03897], timeElapsed: 2ms
2017-12-28 07:23:50,725-05 WARN  [org.ovirt.engine.core.bll.hostdeploy.UpgradeHostCommand] (default task-8) [6f7a1d27-5bf3-4a3b-bef9-4e7ba46db383] Validation of action 'UpgradeHost' failed for user admin@internal-authz. Reasons: VAR__ACTION__UPGRADE,VAR__TYPE__HOST,NO_AVAILABLE_UPDATES_FOR_HOST
2017-12-28 07:23:50,726-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-8) [6f7a1d27-5bf3-4a3b-bef9-4e7ba46db383] method: runAction, params: [UpgradeHost, UpgradeHostParameters:{commandId='7ed03cad-ca41-4c7b-85a8-b62c7dbbe8cb', user='null', commandType='Unknown'}], timeElapsed: 16ms

Code:
def upgrade_host(prefix):
    engine = prefix.virt_env.engine_vm().get_api_v4().system_service()
    vm_service = test_utils.get_vm_service(engine, VM0_NAME)
    host_id = vm_service.get().host.id
    host_service = engine.hosts_service().host_service(host_id)
    # TODO: Enable later. Doesn't really do much though.
    #host_service.upgrade_check()
    host_service.upgrade()
    testlib.assert_true_within_long(
        lambda:
        host_service.get().status == types.HostStatus.UP
    )
    testlib.assert_true_within_long(
        lambda:
        vm_service.get().status == types.VmStatus.UP
    )

Comment 1 Martin Perina 2018-01-02 09:13:46 UTC
There's no way to distinguish between case 1 (no updates are available) and case 2 (the check-for-upgrade flow was not yet executed), but I don't see any benefit in adding logic to distinguish them (it's expected that 'yum update' is performed before adding a host to the engine, and check-for-upgrade is executed daily for all hosts).

Also, the reason why the check-for-upgrade and upgrade flows are separated is that check-for-upgrade can be executed while the host is in Up status, whereas upgrade can be executed only when the host is in Maintenance (if you execute upgrade on a host in Up, it's switched to Maintenance first). We definitely don't want to switch the host to Maintenance only to find out that there are no upgrades.

So the preferred way in the REST API is to execute check-for-upgrade and then upgrade.
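A minimal sketch of the flow described in this comment, independent of the oVirt SDK: `host_service` stands in for the SDK's host service object, and `wait_for_check` is a hypothetical callable abstracting whatever mechanism the caller uses to wait for the asynchronous check-for-upgrade to finish.

```python
def checked_upgrade(host_service, wait_for_check):
    """Run check-for-upgrade first, then upgrade only if updates exist,
    so the host is never moved to Maintenance for nothing."""
    host_service.upgrade_check()   # allowed while the host is Up
    wait_for_check()               # the check runs asynchronously
    if host_service.get().update_available:
        # upgrade() switches an Up host to Maintenance automatically
        host_service.upgrade()
        return True
    return False
```

This keeps the "check while Up, upgrade only when updates are known" ordering that the comment recommends.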

Comment 2 Yaniv Kaul 2018-01-02 12:05:30 UTC
(In reply to Martin Perina from comment #1)
> There's no way how to distinguish between case 1 (no updates are available)
> and case 2 (check-for-upgrade flow was not yet executed), but I don't see
> any benefit to add logic to distinguish that (it's expected to perform 'yum
> update' prior adding host to engine and check-for-upgrade is executed each
> day for all hosts).
> 
> Also the reason why check-for-upgrade and upgrade flows are separated is
> that check-for-upgrade can be executed when host is in Up status, while
> upgrade can be executed only when host is in Maintenance (if you execute
> upgrade on host in Up, it's switched to Maintenance first). We definitely
> don't want to switch to host Maintenance and then find out that there are no
> upgrades.
> 
> So preferred way in RESTAPI is to execute check-for-upgrade and then upgrade.

Right, but since check-for-upgrade does not do anything RESTish, there's no way to avoid this situation.
Perhaps I can catch the exception and handle it gracefully?
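One possible shape of the "catch and handle gracefully" idea. In the real SDK the validation failure surfaces as an error carrying the fault text; here a hypothetical `NoUpgradesAvailable` exception stands in for it so the pattern is self-contained, and `upgrade` / `check_and_wait` are stand-in callables for the SDK operations.

```python
class NoUpgradesAvailable(Exception):
    """Hypothetical stand-in for the 'no upgrades available' fault."""

def try_upgrade(upgrade, check_and_wait):
    """Attempt the upgrade; on the 'no upgrades' fault, run
    check-for-upgrade once and retry.  A second failure means the
    host genuinely has no updates."""
    try:
        upgrade()
        return True
    except NoUpgradesAvailable:
        check_and_wait()
    try:
        upgrade()
        return True
    except NoUpgradesAvailable:
        return False
```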

Comment 3 Martin Perina 2018-01-02 12:40:08 UTC
(In reply to Yaniv Kaul from comment #2)
> (In reply to Martin Perina from comment #1)
> > [...]
> 
> Right, but since check-for-upgrade does not do anything RESTish, there's no
> way to avoid this situation.

Ahh, sorry, I missed that. Ondro, is there some way to monitor the progress (or the end) of the asynchronous check-for-upgrade execution?

Comment 4 Yaniv Kaul 2018-01-02 12:47:07 UTC
(In reply to Martin Perina from comment #3)
> (In reply to Yaniv Kaul from comment #2)
> > (In reply to Martin Perina from comment #1)
> > > [...]
> > 
> > Right, but since check-for-upgrade does not do anything RESTish, there's no
> > way to avoid this situation.
> 
> Ahh, sorry I missed that. Ondro, is there some way how to monitor progress
> (or end of execution) of the asynchronous check-for-upgrade execution?

I was contemplating catching the event?

Comment 5 Ondra Machacek 2018-01-02 13:18:05 UTC
We need to wait for event codes 839 or 887, which report failure, or for event code 885, which reports that the check for upgrade finished.

So something like this should work:

import time

last_event = events_service.list(max=1)[0]

finished = False
while not finished:
    # Re-query the events newer than the starting point on every pass.
    for event in events_service.list(
        from_=int(last_event.id),
        search='host.name=%s' % host_name,
    ):
        if event.code in (839, 887):
            raise Exception("Check for upgrade failed")
        elif event.code == 885:
            finished = True
            break

    if not finished:
        time.sleep(2)

Also, to avoid the failure from the description, it's possible to change the code to something like:

if not host.update_available:
  host.upgrade_check()
  wait_for_upgrade_check()

host_service.upgrade()
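The `wait_for_upgrade_check()` helper above is left undefined in the comment; a hedged sketch of it could look like the following, with `list_events(from_id)` as a hypothetical callable standing in for `events_service.list(from_=..., search='host.name=...')`, and the event codes taken from this comment (839/887 failure, 885 finished).

```python
import time

UPGRADE_CHECK_FAILED = (839, 887)   # failure event codes
UPGRADE_CHECK_FINISHED = 885        # check-for-upgrade finished

def wait_for_upgrade_check(list_events, from_id, timeout=600, poll=2.0):
    """Poll events newer than from_id until the asynchronous
    check-for-upgrade finishes or fails, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for event in list_events(from_id):
            if event.code in UPGRADE_CHECK_FAILED:
                raise RuntimeError("Check for upgrade failed")
            if event.code == UPGRADE_CHECK_FINISHED:
                return
            # Advance the cursor so processed events are not re-read.
            from_id = max(from_id, int(event.id))
        time.sleep(poll)
    raise TimeoutError("Check for upgrade did not finish in time")
```

The timeout guards against the engine never emitting a terminal event, which the bare polling loop would otherwise spin on forever.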

Comment 6 Martin Perina 2018-01-17 14:49:50 UTC
Ondro, could you please create an example for the Python SDK so the above code is documented?

Comment 10 Martin Perina 2018-02-27 11:06:36 UTC
An example was added to the Python SDK, closing ...

