Bug 1041103 - [RFE][nova]: Support removing a nova compute node from the cluster
Summary: [RFE][nova]: Support removing a nova compute node from the cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: Upstream M3
Target Release: 5.0 (RHEL 7)
Assignee: RHOS Maint
QA Contact: Omri Hochman
URL: https://blueprints.launchpad.net/nova...
Whiteboard: upstream_milestone_icehouse-3 upstrea...
Depends On:
Blocks:
 
Reported: 2013-12-12 13:38 UTC by RHOS Integration
Modified: 2019-09-09 17:02 UTC
9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
The Compute API now exposes a mechanism for permanently removing decommissioned compute nodes. Previously, decommissioned nodes would continue to be listed even if the compute service had been disabled and the system re-provisioned. The removal functionality is provided by the "ExtendedServicesDelete" API extension.
Clone Of:
Environment:
Last Closed: 2014-07-08 15:27:21 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:0853 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Enhancement - Compute 2014-07-08 19:22:38 UTC

Description RHOS Integration 2013-12-12 13:38:41 UTC
Cloned from launchpad blueprint https://blueprints.launchpad.net/nova/+spec/remove-nova-compute.

Description:

Currently, OpenStack provides no command to remove a nova compute node from the cluster. So if a customer no longer wants to use a compute node, then even after the nova-compute service goes down on that node, or the node is re-provisioned, "nova service-list" and "nova-manage service list" still show it.

It would be better to introduce a new REST API to remove a nova compute node.
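For illustration, the removal call could take the shape of an authenticated DELETE on a per-service resource. The snippet below is a dry-run sketch with placeholder endpoint, id, and token values; it only prints the request it would issue and sends nothing:

```shell
# Dry-run sketch of the proposed removal API: an authenticated DELETE
# on a per-service resource. All values below are placeholders and no
# request is actually sent.
compute_endpoint="http://controller:8774/v2"   # hypothetical endpoint
service_id=5                                   # hypothetical service id
auth_token="PLACEHOLDER_TOKEN"                 # hypothetical token

request_line="DELETE $compute_endpoint/os-services/$service_id"
echo "$request_line"
echo "X-Auth-Token: $auth_token"
```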


Specification URL (additional information):

None

Comment 4 Xavier Queralt 2014-04-07 15:04:05 UTC
It looks like this feature has not been implemented in the nova CLI yet, which makes it a bit harder to verify, but not impossible:

1. With an AIO deployment, check the currently available services:

[stack@devstack1 devstack]$ nova service-list
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | devstack1 | internal | enabled | up    | 2014-04-07T14:09:41.000000 | -               |
| 2  | nova-cert        | devstack1 | internal | enabled | up    | 2014-04-07T14:09:43.000000 | -               |
| 3  | nova-network     | devstack1 | internal | enabled | up    | 2014-04-07T14:09:34.000000 | -               |
| 4  | nova-scheduler   | devstack1 | internal | enabled | up    | 2014-04-07T14:09:36.000000 | -               |
| 5  | nova-compute     | devstack1 | nova     | enabled | up    | 2014-04-07T14:09:41.000000 | -               |
| 6  | nova-consoleauth | devstack1 | internal | enabled | up    | 2014-04-07T14:09:41.000000 | -               |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+

2. Stop the compute service (the procedure works with any other service) and wait until it changes to the "down" state:

[stack@devstack1 devstack]$ sudo service openstack-nova-compute stop
[stack@devstack1 devstack]$ nova service-list
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | devstack1 | internal | enabled | up    | 2014-04-07T14:12:11.000000 | -               |
| 2  | nova-cert        | devstack1 | internal | enabled | up    | 2014-04-07T14:12:03.000000 | -               |
| 3  | nova-network     | devstack1 | internal | enabled | up    | 2014-04-07T14:12:04.000000 | -               |
| 4  | nova-scheduler   | devstack1 | internal | enabled | up    | 2014-04-07T14:12:06.000000 | -               |
| 5  | nova-compute     | devstack1 | nova     | enabled | down  | 2014-04-07T14:11:11.000000 | -               |
| 6  | nova-consoleauth | devstack1 | internal | enabled | up    | 2014-04-07T14:12:11.000000 | -               |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+

3. Save the id of the service you want to delete:

[stack@devstack1 devstack1]$ SERVICE_ID=5

4. Using the openstack client (see [1] if it is not installed), get the endpoint and an auth token so you can talk to the API:

[stack@devstack1 devstack1]$ COMPUTE_ENDPOINT=$(openstack endpoint show compute -f value)
[stack@devstack1 devstack1]$ AUTH_TOKEN=$(openstack token create -c id -f value)

5. Run the following command, which uses curl, to delete the service from the service list:

[stack@devstack1 devstack1]$ curl -i "$COMPUTE_ENDPOINT/os-services/$SERVICE_ID" -X DELETE -H "X-Auth-Project-Id: admin" -H "X-Auth-Token: $AUTH_TOKEN"

6. Check the available services again and notice that the nova-compute service is no longer listed:

[stack@devstack1 devstack]$ nova service-list
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | devstack1 | internal | enabled | up    | 2014-04-07T14:12:11.000000 | -               |
| 2  | nova-cert        | devstack1 | internal | enabled | up    | 2014-04-07T14:12:03.000000 | -               |
| 3  | nova-network     | devstack1 | internal | enabled | up    | 2014-04-07T14:12:04.000000 | -               |
| 4  | nova-scheduler   | devstack1 | internal | enabled | up    | 2014-04-07T14:12:06.000000 | -               |
| 6  | nova-consoleauth | devstack1 | internal | enabled | up    | 2014-04-07T14:12:11.000000 | -               |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+


[1] To install the openstack-client for your user, you'll need the python-pip package. Once installed, run "pip install --user python-openstackclient"

Comment 5 Omri Hochman 2014-04-08 13:21:32 UTC
Tested on RHEL7.0 with RDO Ice_House (openstack-nova-2014.1-0.13.b3) http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-7/



Before: 
--------
nova service-list
+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                          | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | puma36.scl.lab.tlv.redhat.com | internal | enabled | up    | 2014-04-08T13:05:03.000000 | -               |
| nova-scheduler   | puma36.scl.lab.tlv.redhat.com | internal | enabled | up    | 2014-04-08T13:05:03.000000 | -               |
| nova-conductor   | puma36.scl.lab.tlv.redhat.com | internal | enabled | up    | 2014-04-08T13:05:02.000000 | -               |
| nova-cert        | puma36.scl.lab.tlv.redhat.com | internal | enabled | up    | 2014-04-08T13:05:04.000000 | -               |
| nova-compute     | puma36.scl.lab.tlv.redhat.com | nova     | enabled | down  | 2014-04-08T13:03:52.000000 | -               |
+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+

[root@puma36 ~(keystone_admin)]# curl -i "$COMPUTE_ENDPOINT/os-services/$SERVICE_ID" -X DELETE -H "X-Auth-Project-Id: admin" -H "X-Auth-Token: $AUTH_TOKEN"
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: application/json
X-Compute-Request-Id: req-fdfbffa3-d89e-4207-8ff5-78183b6733e5
Date: Tue, 08 Apr 2014 13:15:22 GMT


After:
-------
[root@puma36 ~(keystone_admin)]# nova service-list
+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                          | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | puma36.scl.lab.tlv.redhat.com | internal | enabled | up    | 2014-04-08T13:15:44.000000 | -               |
| nova-scheduler   | puma36.scl.lab.tlv.redhat.com | internal | enabled | up    | 2014-04-08T13:15:43.000000 | -               |
| nova-conductor   | puma36.scl.lab.tlv.redhat.com | internal | enabled | up    | 2014-04-08T13:15:42.000000 | -               |
| nova-cert        | puma36.scl.lab.tlv.redhat.com | internal | enabled | up    | 2014-04-08T13:15:44.000000 | -               |
+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+

Comment 6 Xavier Queralt 2014-04-09 13:45:49 UTC
The service-delete subcommand has been added in novaclient (See [1]) and it should be included in the next upstream release (probably 2.18.0). Once available, deleting a service from the database should be as easy as:

$ nova service-delete <service_id>

[1] https://blueprints.launchpad.net/python-novaclient/+spec/support-delete-service
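Until the subcommand ships, the manual flow can be guarded with a small wrapper. The sketch below (the helper name is hypothetical) validates the service id and echoes the command instead of executing it, so it stays a dry run:

```shell
# Hypothetical wrapper around the upcoming subcommand: checks that the
# service id is numeric, then echoes the command it would run (drop the
# leading "echo" to execute for real once novaclient >= 2.18.0 is out).
service_delete() {
  case "$1" in
    ''|*[!0-9]*)
      echo "usage: service_delete <numeric service id>" >&2
      return 1
      ;;
  esac
  echo nova service-delete "$1"
}

service_delete 5
```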

Comment 9 errata-xmlrpc 2014-07-08 15:27:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0853.html

