Bug 922778 - Power management fails when enabled during node approval
Summary: Power management fails when enabled during node approval
Keywords:
Status: CLOSED DUPLICATE of bug 1048356
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-webadmin
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.5.0
Assignee: Eli Mesika
QA Contact: sefi litmanovich
URL:
Whiteboard: infra
Depends On:
Blocks:
 
Reported: 2013-03-18 13:43 UTC by Netbulae
Modified: 2016-02-10 19:30 UTC
CC List: 8 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2014-05-14 20:41:34 UTC
oVirt Team: Infra
Embargoed:


Attachments (Terms of Use)
DC and Cluster still blank in fresh 3.4 install (31.32 KB, image/png)
2014-04-02 11:34 UTC, Netbulae
Node2 power management dialog (34.63 KB, image/png)
2014-04-07 08:38 UTC, Netbulae
screenshot after pm_proxy_preferences update (17.27 KB, image/png)
2014-04-08 15:40 UTC, Netbulae

Description Netbulae 2013-03-18 13:43:44 UTC
Description of problem:

I installed the 3 nodes from the ISO and registered them to oVirt. In oVirt I approved them and added the power management settings. The Source list is empty, and Test fails with the message that there is no other host to test it from.

When I approve the nodes AND enable power management in the same step, there is no way to get power management working, as there is nothing in the Source field. Even after I approved all three nodes, oVirt keeps telling me there is no node to test it from. I can enable/disable power management all I want after that, but the Source list stays empty.

When I approve and activate the nodes first and only then set the power management info, everything works perfectly and I see the entries "cluster" and "dc" in the Source list.

Comment 1 Eli Mesika 2013-12-24 12:54:53 UTC
(In reply to Netbulae from comment #0)
Is this still occurring?
If so, please attach:

1) screenshots
2) engine.log

Comment 2 Eli Mesika 2014-02-05 20:24:32 UTC
This may be related to https://bugzilla.redhat.com/show_bug.cgi?id=1048356,
but in any case fencing should still work in this scenario.
Closing this, as the reporter has not provided the requested data.

Comment 3 Netbulae 2014-04-02 11:34:36 UTC
Created attachment 881736 [details]
DC and Cluster still blank in fresh 3.4 install

I installed a fresh 3.4 and added two nodes with power management enabled. The dialog is blank, as can be seen in the screenshot. The Test button reports everything is OK.

Comment 4 Eli Mesika 2014-04-02 12:07:21 UTC
Please provide the output of the following queries:

1) select vds_name, pm_proxy_preferences from vds_static;


2) select * from schema_version where script ilike '%proxy_preferences%';


Thanks
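
For reference, these queries can be run locally on the engine host with psql (a minimal sketch, assuming the default database name "engine" and access as the postgres system user), pasting the two statements above at the engine=# prompt:

su - postgres -c "psql engine"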

Comment 5 Netbulae 2014-04-02 12:26:48 UTC
engine=# select vds_name, pm_proxy_preferences from vds_static;
 vds_name | pm_proxy_preferences 
----------+----------------------
 node1 | 
 node2 | 
(2 rows)

engine=# select * from schema_version where script ilike '%proxy_preferences%';
 id  | version  |                        script                        |             checksum             | installed_by |         started_at         |          ended_at          |   state   | current | comment 
-----+----------+------------------------------------------------------+----------------------------------+--------------+----------------------------+----------------------------+-----------+---------+---------
  19 | 03020180 | upgrade/03_02_0180_add_pm_proxy_preferences.sql      | 832f8501fabdad9fe9f3523567223c83 | engine       | 2014-03-28 11:50:01.150114 | 2014-03-28 11:50:01.412819 | INSTALLED | f       | 
 207 | 03040570 | upgrade/03_04_0570_set_proxy_preferences_default.sql | ad0d2f486704100ec927a59b93c83d56 | engine       | 2014-03-28 11:50:38.002931 | 2014-03-28 11:50:38.028549 | INSTALLED | f       | 
(2 rows)

Comment 6 Eli Mesika 2014-04-02 12:42:27 UTC
Can you please paste the result of the following query as well? Thanks:

select * from vdc_options where option_name ilike 'FenceProxyDefaultPreferences';

Comment 7 Netbulae 2014-04-02 13:48:05 UTC
engine=# select * from vdc_options where option_name ilike 'FenceProxyDefaultPreferences';
 option_id |         option_name          | option_value | version 
-----------+------------------------------+--------------+---------
        63 | FenceProxyDefaultPreferences | cluster,dc   | general
(1 row)
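
For reference, that shows the expected global default ('cluster,dc') is in place. The same value can also be read on the engine host with the engine-config tool (a minimal sketch; exact output formatting may vary by version):

engine-config -g FenceProxyDefaultPreferences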

Comment 8 Eli Mesika 2014-04-02 14:08:38 UTC
Managed to reproduce, reopening.

Comment 9 Eli Mesika 2014-04-02 14:26:25 UTC
Sorry, my mistake.

I had tested:

1) Creating a new Host

2) Editing an existing host with no PM defined

When the New/Edit Host popup was displayed, the "Source" field contained a list of two items: "cluster" and "dc".

Can you please describe the full scenario you are following? I am not able to get an empty "Source" list.

Comment 10 Netbulae 2014-04-02 14:43:55 UTC
These are all the steps:

Clean Centos 6.5 install
yum update
reboot
install the repos
yum install ovirt-engine
engine-setup

install the oVirt Node ISO from PXE
log in as admin
enable DHCP
connect to ovirt-engine
exit

login to ovirt engine web interface
approve node1, enable power management, fill in the required power management fields
approve node2, enable power management, fill in the required power management fields

That's it, I haven't done anything else with this install yet.

I'll do it again tomorrow as I have to check some things for other issues.

Comment 11 Eli Mesika 2014-04-06 14:41:51 UTC
Any updates?

Comment 12 Netbulae 2014-04-07 08:36:42 UTC
I did a fresh install but have exactly the same issue. The Source field is blank in both power management tabs, but the Test button works OK.

Comment 13 Netbulae 2014-04-07 08:38:17 UTC
Created attachment 883503 [details]
Node2 power management dialog

Still blank

Comment 14 Eli Mesika 2014-04-08 14:16:55 UTC
(In reply to Netbulae from comment #13)
> Created attachment 883503 [details]
> Node2 power management dialog
> 
> Still blank

Please run the following on your database, then restart the oVirt engine and let me know if you see the values after doing that:

> update vds_static set pm_proxy_preferences = 'cluster,dc' where pm_enabled;
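
For reference, the full manual workaround amounts to the following (a minimal sketch, assuming the default "engine" database name and an EL6 host where the engine is restarted via its init script):

su - postgres -c "psql engine"
engine=# update vds_static set pm_proxy_preferences = 'cluster,dc' where pm_enabled;
engine=# \q
service ovirt-engine restart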

Comment 15 Netbulae 2014-04-08 15:40:33 UTC
Created attachment 884121 [details]
screenshot after pm_proxy_preferences update

It looks ok now

update vds_static set pm_proxy_preferences = 'cluster,dc' where pm_enabled;
UPDATE 1

Comment 16 Eli Mesika 2014-05-14 20:41:34 UTC
This was resolved as part of resolving BZ 1048356.
Since the fix was in one of the upgrade scripts, it affects only upgrades done after the fix was merged.
A specific workaround was suggested and used to work around this problem manually (see comment 14 and comment 15).

*** This bug has been marked as a duplicate of bug 1048356 ***

