Bug 1049985
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | RDO: Instance hangs in 'Deleting' state ... multi node (using GRE tenant networks) | | |
| Product: | [Community] RDO | Reporter: | Ronelle Landy <rlandy> |
| Component: | openstack-nova | Assignee: | RHOS Maint <rhos-maint> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Gabriel Szasz <gszasz> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | unspecified | CC: | kchamart, lars, ndipanov, pasik, rbryant, rlandy, yeylon |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | Icehouse | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-03-20 15:59:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
Description (Ronelle Landy, 2014-01-08 14:57:48 UTC)
Additional note: the instance was created with: m1.medium | 4 GB RAM | 2 VCPU | 40.0 GB disk.

Had a second report of a test day tester not being able to delete an m1.medium instance (an m1.tiny worked).

Ronelle, can you please also add the Nova api.log from the controller node, and the Nova compute.log from your compute node?

Created attachment 847541 [details]
api.log
Created attachment 847543 [details]
compute.log
A gentle note for next time: please upload contextual plain-text logs captured while you're hitting the errors (instead of more than 15 MB of log files). Something like:
(a) Empty out the relevant log files, e.g. `$ > /var/log/nova/api.log`
(b) Perform the offending test
(c) Upload the (plain text) log files, which would capture just enough context.
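The three steps above can be sketched as a shell session. This is an illustrative sketch, not part of the original report: a scratch file stands in for /var/log/nova/api.log so it is self-contained, and the "offending test" is simulated by writing one line.

```shell
# Scratch file standing in for /var/log/nova/api.log (illustrative path)
LOG=$(mktemp)
echo "old, irrelevant noise" > "$LOG"

# (a) Empty out the relevant log file so only fresh entries are captured
: > "$LOG"

# (b) Perform the offending test -- simulated here by one new log line
echo "2014-01-08 14:34:53 ERROR example entry" >> "$LOG"

# (c) The file now holds just the relevant context, small enough to attach
wc -l < "$LOG"
```

The point of step (a) is that truncating (rather than deleting) the file keeps the daemon's open file handle valid, so logging continues uninterrupted.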
That said, I took a look at the api.log, and I see the below:
-----------------
2014-01-08 14:34:53.860 3534 DEBUG keystoneclient.middleware.auth_token [-] Token validation failure. _validate_user_token /usr/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py:820
2014-01-08 14:34:53.860 3534 TRACE keystoneclient.middleware.auth_token Traceback (most recent call last):
2014-01-08 14:34:53.860 3534 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py", line 812, in _validate_user_token
2014-01-08 14:34:53.860 3534 TRACE keystoneclient.middleware.auth_token expires = confirm_token_not_expired(data)
2014-01-08 14:34:53.860 3534 TRACE keystoneclient.middleware.auth_token File "/usr/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py", line 333, in confirm_token_not_expired
2014-01-08 14:34:53.860 3534 TRACE keystoneclient.middleware.auth_token raise InvalidUserToken('Token authorization failed')
2014-01-08 14:34:53.860 3534 TRACE keystoneclient.middleware.auth_token InvalidUserToken: Token authorization failed
2014-01-08 14:34:53.860 3534 TRACE keystoneclient.middleware.auth_token
2014-01-08 14:34:53.861 3534 DEBUG keystoneclient.middleware.auth_token [-] Marking token 6d5b434481287a617655c46c5b7f7c7d as unauthorized in memcache _cache_store_invalid /usr/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py:1068
2014-01-08 14:34:53.861 3534 WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token 6d5b434481287a617655c46c5b7f7c7d
2014-01-08 14:34:53.861 3534 INFO keystoneclient.middleware.auth_token [-] Invalid user token - rejecting request
2014-01-08 14:34:53.862 3534 INFO nova.osapi_compute.wsgi.server [req-aa58d3d9-6311-4503-adc5-f8ef31a1f88d 434c50d000f4418ea67c6e5a36681e84 b98a2045e57f4160946583856f762ae4] 10.16.96.113 "GET /v2/b98a2045e57f4160946583856f762ae4/servers/detail?host=cloud-qe-2.idm.lab.bos.redhat.com&all_tenants=True HTTP/1.1" status: 401 len: 195 time: 0.0410540
-----------------
But I couldn't extract your compute.tar: when I untarred it, the resulting compute.log also turned out to be a tarball, and when I tried to untar that, it looked corrupted. (A nice demonstration of why plain-text files with contextual logs are more helpful.)
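As an aside, a doubly-wrapped archive like that can be spotted before uploading by listing its contents with `tar -tf`. A minimal sketch (file names are illustrative, reconstructing the shape of the broken attachment):

```shell
# Reconstruct a tar-inside-a-tar: an inner tarball misnamed as a .log,
# then wrapped in an outer tarball (illustrative file names)
TMP=$(mktemp -d)
printf 'plain log line\n' > "$TMP/real.log"
tar -cf "$TMP/compute.log" -C "$TMP" real.log     # inner tarball, named .log
tar -cf "$TMP/compute.tar" -C "$TMP" compute.log  # outer tarball

# Extract the outer archive, then probe what came out before assuming it
# is plain text: 'tar -tf' succeeds only if the file is itself an archive
tar -xf "$TMP/compute.tar" -C "$TMP"
tar -tf "$TMP/compute.log"   # lists: real.log  (so the "log" is a tarball)
```

A quick `file compute.log` after extraction serves the same purpose.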
* * *
And it's been more than six months since this was tested with Icehouse milestone-1 packages. I wonder if you're still seeing this behavior with current stable Icehouse packages on Fedora? (Latest stable update packages: 2014.1.2.)
[Ping, bug triaging here. It's been in NEEDINFO for 5 months. If there's no response from the reporter in two weeks, this bug will be closed with INSUFFICIENT_DATA. But the reporter can reopen the bug if it is reproducible again.]

(In reply to Kashyap Chamarthy from comment #6)
> [Ping, bug triaging here. It's been in NEEDINFO for 5 months. If there's no
> response from reporter in two weeks, this bug will be closed with
> INSUFFICIENT_DATA. But, the reporter can reopen the bug if the bug is
> reproducible again.]

This BZ was closed correctly. Apologies for the late reply. Works in current release.