Bug 1228317 - rdo-manager: Running ahc-match when there's a registered host with no disks configured fails: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
Status: CLOSED NOTABUG
Product: RDO
Classification: Community
Component: rdo-manager
Version: Kilo
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Imre Farkas
QA Contact: yeylon@redhat.com
 
Reported: 2015-06-04 15:10 UTC by Alexander Chuzhoy
Modified: 2016-09-20 05:03 UTC

Doc Type: Bug Fix
Last Closed: 2015-06-18 18:37:58 UTC


Attachments
output from ironic node-show (51.80 KB, text/plain)
2015-06-04 15:33 UTC, Alexander Chuzhoy
state-spec (468 bytes, application/x-gzip)
2015-06-18 16:03 UTC, Alexander Chuzhoy

Description Alexander Chuzhoy 2015-06-04 15:10:26 UTC
rdo-manager: Running ahc-match when there's a registered host with no disks configured fails: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute

The error is:
ERROR:ahc_tools.match:Failed to match node uuid: 7fc7bd62-a5e7-409e-81be-689d5574cb5f. Error was: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
ERROR:ahc_tools.match:The following nodes did not match any profiles and will not be updated: 7fc7bd62-a5e7-409e-81be-689d5574cb5f


Environment:
instack-undercloud-2.1.0-4.el7ost.noarch
ahc-tools-0.1.1-2.el7ost.noarch
openstack-ironic-discoverd-1.1.0-3.el7ost.noarch

Steps to reproduce:
1. First register/discover the node(s)

2. Run "which instack-ironic-deployment"

3. Run "source /home/stack/stackrc"

4. Create the file /etc/edeploy/default.cmdb with the following content:
 {'target_raid_configuration': {
     'logical_disks': ({'controller': 'RAID.Integrated.1-1',
                            number_of_physical_disks': 2,
                            'disk_type': 'hdd',
                            'is_root_volume': 'true',
                            'raid_level': '1',
                            'size_gb': 50,
                            'volume_name': 'root_volume'},)}}

5. Add the drac driver/details for the same host whose BIOS you checked (use "ironic node-show <nodeID>" to get the management IP):

ironic node-update $NODE add driver_info/drac_username='username'
ironic node-update $NODE add driver_info/drac_password='password'
ironic node-update $NODE add driver_info/drac_host='management IP'
ironic node-update $NODE add driver='pxe_drac'

6. Run ironic node-vendor-passthru --http-method GET $NODE list_virtual_disks

7. Run "sudo yum install -y ahc-tools"

8. Run "sudo -E ahc-match"

Result:
ERROR:ahc_tools.match:Failed to match node uuid: 7fc7bd62-a5e7-409e-81be-689d5574cb5f. Error was: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
ERROR:ahc_tools.match:The following nodes did not match any profiles and will not be updated: 7fc7bd62-a5e7-409e-81be-689d5574cb5f


Expected result:
No errors.

Comment 1 Imre Farkas 2015-06-04 15:24:55 UTC
Could you please post the output of ironic node-show command?

Comment 2 Alexander Chuzhoy 2015-06-04 15:33:21 UTC
Created attachment 1034782 [details]
output from ironic node-show

Comment 3 Imre Farkas 2015-06-18 14:08:29 UTC
I couldn't reproduce it based on the discovered node properties. Could you please also include the content of the state and the *.spec files?

Comment 4 Alexander Chuzhoy 2015-06-18 16:03:06 UTC
So the default.cmdb file now needs to be placed under /etc/ahc-tools/edeploy.
The issue still reproduces.

The files are attached in the gzipped tar.

Comment 5 Alexander Chuzhoy 2015-06-18 16:03:27 UTC
Created attachment 1040552 [details]
state-spec

Comment 6 Alexander Chuzhoy 2015-06-18 17:29:11 UTC
This isn't a bug.

The problem is in the content of the default.cmdb file. The culprits were found:

 {'target_raid_configuration': {
     'logical_disks': ({'controller': 'RAID.Integrated.1-1',
                            number_of_physical_disks': 2,
                            'disk_type': 'hdd',
                            'is_root_volume': 'true',
                            'raid_level': '1',
                            'size_gb': 50,
                            'volume_name': 'root_volume'},)}}



1. Missing a quote before the "number_of_physical_disks" string.
2. The top-level structure should be a list:
[ {'target_raid_configuration': {
     'logical_disks': ({'controller': 'RAID.Integrated.1-1',
                            'number_of_physical_disks': 2,
                            'disk_type': 'hdd',
                            'is_root_volume': 'true',
                            'raid_level': '1',
                            'size_gb': 50,
                            'volume_name': 'root_volume'},)}}]
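
Since the CMDB entry is a Python literal, the two culprits can be checked by parsing both versions with the standard library — a minimal sketch for illustration only (ast.literal_eval is not part of the ahc-tools workflow, and eDeploy may load the file differently):

```python
import ast

# Corrected entry from this comment: the quote before
# 'number_of_physical_disks' is restored and the whole
# structure is wrapped in a list.
fixed = """
[{'target_raid_configuration': {
     'logical_disks': ({'controller': 'RAID.Integrated.1-1',
                        'number_of_physical_disks': 2,
                        'disk_type': 'hdd',
                        'is_root_volume': 'true',
                        'raid_level': '1',
                        'size_gb': 50,
                        'volume_name': 'root_volume'},)}}]
"""
cmdb = ast.literal_eval(fixed.strip())
assert isinstance(cmdb, list) and len(cmdb) == 1

# The original entry fails to parse at all because of the
# missing opening quote before number_of_physical_disks:
broken = "{'logical_disks': ({number_of_physical_disks': 2},)}"
try:
    ast.literal_eval(broken)
except (SyntaxError, ValueError) as exc:
    print('broken entry rejected:', type(exc).__name__)
```

This also explains why ahc-match reported "Unable to match requirements" rather than a parse error at the file's location: the malformed CMDB simply left no usable role data to match against.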

