Description of problem:

When registering or discovering bare metal nodes, Ironic throws errors. This began around April 10th-13th.

Version-Release number of selected component (if applicable):

openstack-ironic-api.noarch        2015.1-dev546.gf102adf.el7.centos
openstack-ironic-common.noarch     2015.1-dev546.gf102adf.el7.centos
openstack-ironic-conductor.noarch
openstack-ironic-discoverd.noarch
python-ironic-discoverd.noarch     1.1.0-0.99.20150327.1456git.el7.centos
python-ironicclient.noarch         0.4.1.25-g3b171c5.el7.centos
instack.noarch                     0.0.6.4-g57c723a.el7.centos
instack-undercloud.noarch          2.0.0-dev1584.g9dbaa26.el7.centos

How reproducible:

100%. I tried this today in two separate bare metal environments, one HP and one Dell. Both were fresh installs, and both hit the same error.

Steps to Reproduce:
1. Install instack-undercloud
2. Register nodes
3. Discover nodes

Actual results:

The first time you run "instack-ironic-deployment --nodes-json instackenv.json --register-nodes", it fails with:

===============
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)
Preparing for deployment...
Registering nodes from instackenv.json
Node db5dc958-10de-4b5c-9621-9e198cc4e1f5 can not be updated while a state transition is in progress. (HTTP 409)
===============

Running the command again appears to succeed, but discovery then fails with another state-transition error.

Expected results:

Node registration and discovery should work.

Additional info:

This worked last week; it was failing on Monday, Apr 13th, and possibly on Friday, Apr 10th.
Dan, have you seen it recently? I remember us bumping the retry timeout, so it might be gone.
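For context, "bumping the retry timeout" refers to retrying node updates while Ironic reports HTTP 409 (Conflict), which it returns whenever the node is mid state transition. Below is a minimal sketch of that retry-on-conflict pattern, assuming python-ironicclient with illustrative credentials and the node UUID from the report; this is not the actual instack-ironic-deployment code:

===============
import time

from ironicclient import client as ironic_client
from ironicclient import exc as ironic_exc


def update_node_with_retry(ironic, node_uuid, patch, retries=5, delay=2):
    """Retry a node update while Ironic returns HTTP 409.

    409 means the node is mid state transition; waiting briefly and
    retrying usually succeeds once the transition completes.
    """
    for attempt in range(retries):
        try:
            return ironic.node.update(node_uuid, patch)
        except ironic_exc.Conflict:  # HTTP 409 from the API
            if attempt == retries - 1:
                raise
            time.sleep(delay)


# Illustrative client setup; credentials and endpoint are placeholders.
ironic = ironic_client.get_client(
    1,
    os_username='admin',
    os_password='secret',
    os_tenant_name='admin',
    os_auth_url='http://192.0.2.1:5000/v2.0')

# JSON-patch update (e.g. fixing a driver_info field) against the node
# UUID from the error message above.
patch = [{'op': 'replace',
          'path': '/driver_info/ipmi_address',
          'value': '192.0.2.10'}]
update_node_with_retry(ironic, 'db5dc958-10de-4b5c-9621-9e198cc4e1f5', patch)
===============

Raising the retry count or delay simply gives slow state transitions more time to finish before registration gives up.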
(In reply to Dmitry Tantsur from comment #8)
> Dan, have you seen it recently? I remember us bumping the retry timeout, so
> it might be gone.

No, I haven't seen it once in the last 20-30 deployments I have done. This seems to be fixed.