Bug 1297402
Summary: | Initialize macPool on app startup and halt engine if pool cannot be initialized | |
---|---|---|---
Product: | [oVirt] ovirt-engine | Reporter: | Martin Mucha <mmucha>
Component: | Backend.Core | Assignee: | Martin Mucha <mmucha>
Status: | CLOSED CURRENTRELEASE | QA Contact: | Meni Yakove <myakove>
Severity: | high | Docs Contact: |
Priority: | medium | |
Version: | 3.6.0 | CC: | bugs, danken, mburman, mmucha
Target Milestone: | ovirt-4.0.0-alpha | Flags: | rule-engine: ovirt-4.0.0+, rule-engine: planning_ack+, danken: devel_ack+, rule-engine: testing_ack+
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-07-05 07:50:18 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | Network | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: |
Description
Martin Mucha 2016-01-11 12:25:38 UTC
Martin, can you suggest a way to reproduce it? Lacking reproduction, I think that such a big patch should wait for 3.6.5.

---

delete from mac_pool_ranges; creates the state in which this will occur. Doing so will end up with a mac_pool without any mac_range (no available MAC in its ranges). When the engine is started in this DB state, it will log an error during startup, but the engine will start successfully; it will then fail for every operation contacting the macPool.

This is really not urgent, since this behavior is part of the MAC pool's original design (3.2?). This patch was backported only because we thought it might fix another issue. So I think we can easily abandon it for 3.6 and 3.6.2.

---

This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.

---

Hi Martin, I'm adding this comment after a further discussion with you. I was trying to test this report and verify it, using your suggestion of adding a fake mac_pool without any range to the engine's DB:

    engine=# insert into mac_pools (id, name) values ('19479b7e-0ac5-11e6-a513-68f7280ce233', 'fake');
    INSERT 0 1

I managed to restart the ovirt-engine service successfully:

    engine=# \q
    [root@dhcp160-211 ~]# systemctl restart ovirt-engine
    [root@dhcp160-211 ~]# systemctl status ovirt-engine
    ● ovirt-engine.service - oVirt Engine
       Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled; vendor preset: disabled)
       Active: active (running) since Mon 2016-04-25 12:27:08 IDT; 6s ago
     Main PID: 927 (ovirt-engine.py)
       CGroup: /system.slice/ovirt-engine.service
               ├─927 /usr/bin/python /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output --systemd=notify start
               └─957 ovirt-engine -server -XX:+TieredCompilation -Xms1024M -Xmx1024M -Djava.awt.headless=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse.enableSNIExtensio...

    Apr 25 12:27:05 dhcp160-211.scl.lab.tlv.redhat.com systemd[1]: Starting oVirt Engine...
    Apr 25 12:27:08 dhcp160-211.scl.lab.tlv.redhat.com systemd[1]: Started oVirt Engine.

I was expecting it to fail, but it seems it was running. However, I was not able to connect to the engine UI any more. Only after deleting the fake mac_pool from the engine's DB was I able to log in to the engine UI again:

    postgres=# \c engine
    You are now connected to database "engine" as user "postgres".
    engine=# delete from mac_pools where id = '19479b7e-0ac5-11e6-a513-68f7280ce233';
    DELETE 1
    engine=# \q
    [root@dhcp160-211 ~]# systemctl restart ovirt-engine

So I'm not sure that this is what we were actually expecting.

---

Yes and no. The insert into the DB created an invalid state, which should lead to a situation where the engine does not start up. Thus you should not be able to log in, so I think this bug is verified. Another question is whether it's OK that the service claims the engine is running when it's not. I don't have sufficient information to judge whether that is a bug, but it's very weird and not user (admin) friendly.

---

Based on comments 4 and 5 ^^ this bug is verified on 4.0.0-0.0.master.20160423161403.gite38df80.el7.centos.

---

oVirt 4.0.0 has been released, closing current release.
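The behavior the summary asks for (eagerly initialize each MAC pool at engine startup and halt if any pool cannot be initialized, instead of starting and then failing on every allocation) can be sketched as follows. This is only an illustrative sketch: the ovirt-engine backend is Java, and the names here (`MacPool`, `initialize_mac_pools`, `MacPoolInitializationError`) are hypothetical, not the actual classes in the codebase.

```python
class MacPoolInitializationError(RuntimeError):
    """Raised when a MAC pool cannot be initialized (e.g. it has no ranges)."""


class MacPool:
    def __init__(self, pool_id, name, ranges):
        # `ranges` stands in for rows of mac_pool_ranges:
        # a list of (from_mac, to_mac) tuples.
        self.pool_id = pool_id
        self.name = name
        self.ranges = ranges

    def initialize(self):
        # A pool with no ranges has no MACs to hand out; fail here rather
        # than deferring the error to the first allocation request.
        if not self.ranges:
            raise MacPoolInitializationError(
                "MAC pool '%s' (%s) has no ranges" % (self.name, self.pool_id))


def initialize_mac_pools(pools):
    """Initialize every pool eagerly at startup; any failure halts startup."""
    for pool in pools:
        pool.initialize()


# With the invalid DB state from the reproduction (a pool without ranges,
# like the 'fake' pool inserted during verification), startup now fails fast:
pools = [
    MacPool("6b5b4b7e", "Default",
            [("00:1a:4a:16:01:51", "00:1a:4a:16:01:e1")]),
    MacPool("19479b7e", "fake", []),  # no ranges: invalid state
]
try:
    initialize_mac_pools(pools)
    startup_ok = True
except MacPoolInitializationError as exc:
    startup_ok = False
    error_message = str(exc)
```

This matches the verified outcome above: with the fail-fast check in place, the engine refuses to come up in the invalid DB state rather than reporting `active (running)` while being unusable.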