Bug 1410189 - 04_01_0010_add_mac_pool_id_to_vds_group.sql Fails upgrade if engine has cluster not attached to DC
Summary: 04_01_0010_add_mac_pool_id_to_vds_group.sql Fails upgrade if engine has cluster not attached to DC
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Database.Core
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ovirt-4.1.0-beta
Target Release: 4.1.0.2
Assignee: Martin Mucha
QA Contact: Michael Burman
URL:
Whiteboard:
Duplicates: 1412556
Depends On:
Blocks:
 
Reported: 2017-01-04 17:27 UTC by sefi litmanovich
Modified: 2017-02-01 14:52 UTC (History)
CC List: 6 users

Fixed In Version: http://resources.ovirt.org/repos/ovirt/experimental/4.1/latest.tested/rpm/el7/noarch/ovirt-engine-4.1.0-0.4.master.20170115090623.git8e588d9.el7.centos.noarch.rpm
Doc Type: If docs needed, set a value
Doc Text:
The upgrade script 04_01_0010_add_mac_pool_id_to_vds_group.sql assumed that a cluster cannot exist without a relation to some data center. Such clusters cannot run any VM and have other serious problems, so it was assumed that nobody has this setup. That assumption was wrong, and the db script failed when creating the NOT NULL constraint. After this fix, the upgrade also works for environments containing such clusters.
Clone Of:
Environment:
Last Closed: 2017-02-01 14:52:59 UTC
oVirt Team: Network
Embargoed:
rule-engine: ovirt-4.1+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 70107 0 master MERGED core: use default mac pool for clusters without DC 2017-01-13 09:01:38 UTC
oVirt gerrit 70108 0 ovirt-engine-4.1 MERGED core: use default mac pool for clusters without DC 2017-01-15 09:03:20 UTC

Description sefi litmanovich 2017-01-04 17:27:01 UTC
Description of problem:
Having a cluster not attached to any data center is not common, recommended, or helpful in any way, but it is possible and I saw it happen in one env here in QE.
When that happens, the cluster's storage_pool_id column in the DB is NULL for that cluster, which causes this upgrade db script, 04_01_0010_add_mac_pool_id_to_vds_group.sql, to fail the engine upgrade:

SELECT fn_db_add_column('cluster',
                        'mac_pool_id',
                        'UUID REFERENCES mac_pools (id)');

UPDATE
  cluster AS c
SET
  mac_pool_id = (
      SELECT
        sp.mac_pool_id
      FROM
        storage_pool sp
      WHERE
        sp.id = c.storage_pool_id
    );

ALTER TABLE cluster ALTER COLUMN mac_pool_id SET NOT NULL;
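
The merged fix (see the gerrit links above, "core: use default mac pool for clusters without DC") goes the other way: instead of requiring every cluster to have a data center, it falls back to the engine's default MAC pool. A minimal sketch of such an UPDATE, assuming the default pool is marked by a boolean default_pool column in mac_pools (that column name is not shown in this report and is only an illustration):

-- Fall back to the default MAC pool when the cluster has no data center
-- (NULL storage_pool_id). 'default_pool' is an assumed column name, and
-- the second subquery assumes exactly one default pool exists.
UPDATE
  cluster AS c
SET
  mac_pool_id = COALESCE(
      (SELECT sp.mac_pool_id
       FROM storage_pool sp
       WHERE sp.id = c.storage_pool_id),
      (SELECT id FROM mac_pools WHERE default_pool = true)
    );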


The workaround is easy: just remove the cluster and run engine-setup again.
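
To check whether an environment is affected before upgrading (and which clusters would need to be removed), a query along these lines can be run against the engine DB; the cluster_id and name column names are assumptions here, only storage_pool_id appears in the script above:

-- List clusters that are not attached to any data center
SELECT cluster_id, name
FROM cluster
WHERE storage_pool_id IS NULL;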


Version-Release number of selected component (if applicable):
This was found when running engine-setup with one of the 4.1 upstream versions (not sure which, but the master branch still has this same file, so nothing has changed). The setup was done with a DB restored from a 4.0 env, which is why this was possible.

How reproducible:
always

Steps to Reproduce:
1. Create a data center in rhevm 4.0
2. Create a cluster in that data center.
3. Remove the data center leaving only the cluster.
4. Back up the engine with the engine-backup tool.
5. Install the 4.1 rpms and everything else needed to install rhevm 4.1.
6. Restore the engine database using the engine-backup tool.
7. Run engine-setup.

Actual results:
Setup fails because '04_01_0010_add_mac_pool_id_to_vds_group.sql' got NULL for storage_pool_id (I did not keep the log from that run; no need to reproduce, it's quite obvious).

Expected results:
engine-setup is successful.

Additional info:

Comment 1 Dan Kenigsberg 2017-01-23 13:11:27 UTC
*** Bug 1412556 has been marked as a duplicate of this bug. ***

Comment 2 Michael Burman 2017-01-24 07:28:32 UTC
Verified on - rhevm-4.1.0.2-0.1.el7.noarch

Upgraded from 4.0.6.3-0.1.el7ev >> rhevm-4.1.0.2-0.1.el7.noarch with a few clusters without DCs in the setup. The upgrade passed successfully.
Each cluster without a DC got its default MAC pool range.
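
A possible way to double-check this from the DB side is a query like the following sketch; the name and default_pool column names are assumptions not shown in this report:

-- For each cluster without a DC, show which MAC pool it was assigned
-- and whether that pool is the default one
SELECT c.name, c.mac_pool_id, mp.default_pool
FROM cluster c
JOIN mac_pools mp ON mp.id = c.mac_pool_id
WHERE c.storage_pool_id IS NULL;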

