Description of problem:
After a successful FFU upgrade from RHOS-10 to RHOS-13, in the second tempest run, the discover-tempest-config command fails with the error below:

ServiceError: Request on service 'volume' with url 'http://10.0.0.103:8776/v1/9994c7ff34c04d8aa22fd663b20f72e7/extensions' failed with code 404

Steps to Reproduce:
1. Deploy RHOS 10
2. FFU upgrade to RHOS-13
3. Run discover-tempest-config post upgrade to RHOS-13

Actual results:
"cmd": "test -e ~/.virtualenvs/.tempest_conf/bin/activate && source ~/.virtualenvs/.tempest_conf/bin/activate\n source ~/keystonerc\n /usr/bin/discover-tempest-config --deployer-input ~/ir-tempest-deployer-input.conf --debug -v --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD scenario.img_dir ~/tempest_13/etc compute.max_microversion latest compute.min_compute_nodes 2 image.http_image http://rhos-qe-mirror-tlv.usersys.redhat.com/images/cirros-0.4.0-x86_64-uec.tar.gz orchestration.stack_owner_role heat_stack_owner validation.run_validation true compute-feature-enabled.vnc_console true compute-feature-enabled.live_migration_back_and_forth true compute-feature-enabled.live_migration_paused_instances true compute-feature-enabled.block_migration_for_live_migration true compute-feature-enabled.live_migration true compute-feature-enabled.resize true compute-feature-enabled.block_migrate_cinder_iscsi true compute-feature-enabled.scheduler_available_filters all compute-feature-enabled.volume_backed_live_migration true compute-feature-enabled.cold_migration true compute-feature-enabled.personality true compute-feature-enabled.config_drive true identity.region regionOne --out ~/tempest_13/etc/tempest.conf",
"delta": "0:00:05.264935", "end": "2019-10-29 20:48:41.561826", "rc": 1, "start": "2019-10-29 20:48:36.296891"

2019-10-29 20:48:40.556 2023 DEBUG config_tempest.constants [-] Setting [service_available] ceilometer = True set /usr/lib/python2.7/site-packages/config_tempest/tempest_conf.py:107
2019-10-29 20:48:41.504 2023 DEBUG config_tempest.constants [-] Setting [service_available] cinder = True set /usr/lib/python2.7/site-packages/config_tempest/tempest_conf.py:107
2019-10-29 20:48:41.508 2023 CRITICAL tempest [-] Unhandled error: ServiceError: Request on service 'volume' with url 'http://10.0.0.103:8776/v1/9994c7ff34c04d8aa22fd663b20f72e7/extensions' failed with code 404
2019-10-29 20:48:41.508 2023 ERROR tempest Traceback (most recent call last):
2019-10-29 20:48:41.508 2023 ERROR tempest   File "/usr/bin/discover-tempest-config", line 10, in <module>
2019-10-29 20:48:41.508 2023 ERROR tempest     sys.exit(main())
2019-10-29 20:48:41.508 2023 ERROR tempest   File "/usr/lib/python2.7/site-packages/config_tempest/main.py", line 602, in main
2019-10-29 20:48:41.508 2023 ERROR tempest     verbose=args.verbose
2019-10-29 20:48:41.508 2023 ERROR tempest   File "/usr/lib/python2.7/site-packages/config_tempest/main.py", line 524, in config_tempest
2019-10-29 20:48:41.508 2023 ERROR tempest     services = Services(clients, conf, credentials)
2019-10-29 20:48:41.508 2023 ERROR tempest   File "/usr/lib/python2.7/site-packages/config_tempest/services/services.py", line 42, in __init__
2019-10-29 20:48:41.508 2023 ERROR tempest     self.discover()
2019-10-29 20:48:41.508 2023 ERROR tempest   File "/usr/lib/python2.7/site-packages/config_tempest/services/services.py", line 104, in discover
2019-10-29 20:48:41.508 2023 ERROR tempest     service.set_extensions()
2019-10-29 20:48:41.508 2023 ERROR tempest   File "/usr/lib/python2.7/site-packages/config_tempest/services/volume.py", line 26, in set_extensions
2019-10-29 20:48:41.508 2023 ERROR tempest     body = self.do_get(self.service_url + '/extensions')
2019-10-29 20:48:41.508 2023 ERROR tempest   File "/usr/lib/python2.7/site-packages/config_tempest/services/base.py", line 67, in do_get
2019-10-29 20:48:41.508 2023 ERROR tempest     " with code %d" % (self.s_type, url, r.status))
2019-10-29 20:48:41.508 2023 ERROR tempest ServiceError: Request on service 'volume' with url 'http://10.0.0.103:8776/v1/9994c7ff34c04d8aa22fd663b20f72e7/extensions' failed with code 404

Expected results:

Additional info:
Hi Archit,

I think the job sources the wrong credentials. From the output you shared I can see "source ~/keystonerc", which means that discover-tempest-config uses the credentials for RHOS-10 and not for RHOS-13. We can also see in the traceback that discover-tempest-config contacted the v1 endpoint (a result of using the RHOS-10 credentials), which is not available in RHOS-13, and therefore it failed with 404.

Let me know if this helps.

Regards,
Martin
I can observe the same behaviour after an update of OSP13 (minor update). My keystonerc/overcloudrc seems to be updated and includes the correct AUTH_URL.

I was digging a little bit and the issue seems to be the following. The service list I can see for cinder is:

$ openstack service list | grep cinder
| 61ac66eb9cc645f88951de5f95a8b6c3 | cinderv2 | volumev2 |
| 65c0f8e0eb474235a671c754ee4c327f | cinderv3 | volumev3 |
| 936ce3e6f7ed45efb84d22e73ace13f3 | cinderv3 | volume   |
| f77e8dd03fdc444d85b29e984cdb19f4 | cinder   | volume   |

That means there is a service named cinderv3 twice, with a different service type each time (volume and volumev3). I have no idea whether this is allowed or not, but it does confuse tempestconf.

Tempestconf tries to discover known services in config_tempest.services.services by looping through the defined classes, e.g. config_tempest.services.volume.VolumeService. This way it finds, in my env, services with the names cinderv2 and cinderv3. Then tempestconf tries to find the type of each service found by name (cinderv2 and cinderv3) from the service list, and for some reason it finds only the service type "volume" for the cinderv3 service and not volumev3 (not sure why, I have not dug into the code more).
And finally it tries to find the endpoint for the cinderv3 service based on its type, which is "volume", and in the service catalog the endpoint for the service with type volume is https://10.0.10.100:13776/v1/4ebf8a028971474284c46cb4d3f7a4b (in my env) and not https://10.0.10.100:13776/v3/4ebf8a028971474284c46cb4d3f7a4b as it is for service type "volumev3":

$ openstack catalog list
+------------+-----------------+------------------------------------------------------------------------------+
| Name       | Type            | Endpoints                                                                    |
+------------+-----------------+------------------------------------------------------------------------------+
| glance     | image           | regionOne
|            |                 |   internal: http://172.25.1.7:9292
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:9292
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13292
|            |                 |
| panko      | event           | regionOne
|            |                 |   public: https://10.0.10.100:13977
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8977
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:8977
|            |                 |
| aodh       | alarming        | regionOne
|            |                 |   internal: http://172.25.1.7:8042
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:8042
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13042
|            |                 |
| keystone   | identity        | regionOne
|            |                 |   public: https://10.0.10.100:13000
|            |                 | regionOne
|            |                 |   admin: http://192.168.24.31:35357
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:5000
|            |                 |
| swift      | object-store    | regionOne
|            |                 |   admin: http://172.23.1.7:8080
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13808/v1/AUTH_4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   internal: http://172.23.1.7:8080/v1/AUTH_4ebf8a028971474284c46cb4d3f7a4bf
|            |                 |
| cinderv2   | volumev2        | regionOne
|            |                 |   admin: http://172.25.1.7:8776/v2/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8776/v2/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13776/v2/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 |
| cinderv3   | volumev3        | regionOne
|            |                 |   internal: http://172.25.1.7:8776/v3/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:8776/v3/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13776/v3/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 |
| nova       | compute         | regionOne
|            |                 |   admin: http://172.25.1.7:8774/v2.1
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8774/v2.1
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13774/v2.1
|            |                 |
| cinderv3   | volume          | regionOne
|            |                 |   public: https://10.0.10.100:13776/v3/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8776/v3/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:8776/v3/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 |
| placement  | placement       | regionOne
|            |                 |   public: https://10.0.10.100:13778/placement
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:8778/placement
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8778/placement
|            |                 |
| neutron    | network         | regionOne
|            |                 |   admin: http://172.25.1.7:9696
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:9696
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13696
|            |                 |
| heat-cfn   | cloudformation  | regionOne
|            |                 |   public: https://10.0.10.100:13005/v1
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:8000/v1
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8000/v1
|            |                 |
| sahara     | data-processing | regionOne
|            |                 |   admin: http://172.25.1.7:8386/v1.1/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8386/v1.1/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13386/v1.1/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 |
| ceilometer | metering        |
| gnocchi    | metric          | regionOne
|            |                 |   public: https://10.0.10.100:13041
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:8041
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8041
|            |                 |
| cinder     | volume          | regionOne
|            |                 |   admin: http://172.25.1.7:8776/v1/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8776/v1/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   public: https://10.0.10.100:13776/v1/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 |
| heat       | orchestration   | regionOne
|            |                 |   public: https://10.0.10.100:13004/v1/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   internal: http://172.25.1.7:8004/v1/4ebf8a028971474284c46cb4d3f7a4bf
|            |                 | regionOne
|            |                 |   admin: http://172.25.1.7:8004/v1/4ebf8a028971474284c46cb4d3f7a4bf
+------------+-----------------+------------------------------------------------------------------------------+

Tempestconf uses the v1 endpoint for cinderv3, which obviously does not work. I do not know if one service can have multiple types, nor how the env got into such a shape during the upgrade, but tempestconf does not deal with such a situation.
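The faulty lookup can be sketched in a few lines of Python. This is an illustrative model, not the actual python-tempestconf code: the catalog entries below are the cinder rows from the `openstack catalog list` output in this comment, `endpoint_for_type` is a hypothetical helper, and which of the two "volume" entries wins depends on catalog order. Once cinderv3 is wrongly resolved to the type "volume", a lookup by type can return the v1 URL of the old cinder service:

```python
# Illustrative sketch, not actual python-tempestconf code.
# Catalog entries (name, type, public URL) as in the paste above:
catalog = [
    ("cinderv2", "volumev2", "https://10.0.10.100:13776/v2/4ebf8a028971474284c46cb4d3f7a4bf"),
    ("cinderv3", "volumev3", "https://10.0.10.100:13776/v3/4ebf8a028971474284c46cb4d3f7a4bf"),
    ("cinder",   "volume",   "https://10.0.10.100:13776/v1/4ebf8a028971474284c46cb4d3f7a4bf"),
    ("cinderv3", "volume",   "https://10.0.10.100:13776/v3/4ebf8a028971474284c46cb4d3f7a4bf"),
]

def endpoint_for_type(service_type):
    """Return the first public URL in the catalog matching the type."""
    for _name, s_type, url in catalog:
        if s_type == service_type:
            return url
    return None

# cinderv3 was (wrongly) resolved to type 'volume', so the lookup hits
# the old 'cinder' entry first and returns the v1 endpoint; the
# subsequent GET of <endpoint>/extensions then 404s on RHOS 13.
print(endpoint_for_type("volume") + "/extensions")
```

With the correct type, `endpoint_for_type("volumev3")` would return the v3 URL and the extensions request would succeed.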
(In reply to Martin Kopec from comment #2)
> Hi Archit,
>
> I think the job sources wrong credentials, from the output you shared I can
> see "source ~/keystonerc" which means that discover-tempest-config uses
> credentials for 10 and not for 13. Also we can see in the traceback that
> discover-tempest-config contacted v1 endpoint (result of usage of
> credentials for 10) which is not available in 13 and therefore it failed
> with 404.
>
> Let me know if this helps.
>
> Regards,
> Martin

Clearing the needinfo, it seems to be answered by c#3.
In python-tempestconf the services and their types are gathered from the catalog and stored in a dictionary in a {service_name: type} format [1]. In the bug description we can see that there are 2 services with the same name but different types, and because of the dict structure only the latest cinderv3 entry in the catalog is stored in the dict, which happens to be the cinderv3 with the volume type. python-tempestconf then uses the volume type to find the endpoint (instead of the volumev3 type) [2], and that's why a wrong endpoint is picked, which leads to a 404.

[1] https://opendev.org/openstack/python-tempestconf/src/branch/master/config_tempest/services/services.py#L73-L74
[2] https://opendev.org/openstack/python-tempestconf/src/branch/master/config_tempest/services/services.py#L89
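The overwrite described above can be reproduced with a minimal dict sketch. The service names and types are taken from this report; the mapping shape mirrors the {service_name: type} dict described above, not the exact python-tempestconf code:

```python
# Minimal sketch of the {service_name: type} mapping described above
# (illustrative, not the exact python-tempestconf code).
catalog_entries = [
    ("cinderv2", "volumev2"),
    ("cinderv3", "volumev3"),  # the entry we want for cinderv3
    ("cinderv3", "volume"),    # duplicate name: overwrites the key
    ("cinder",   "volume"),
]

services = {}
for name, s_type in catalog_entries:
    services[name] = s_type  # a later duplicate silently wins

print(services["cinderv3"])  # -> volume, not volumev3
```

Because a Python dict keeps one value per key, the earlier cinderv3/volumev3 entry is silently lost, and every later lookup for cinderv3 starts from the wrong type.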
*** Bug 1769721 has been marked as a duplicate of this bug. ***
The fix is part of the python-tempestconf-2.4.0-1.el7ost package, which is available in the RHOS 13 repositories via the latest symlink. The automated jobs which discovered the bug cannot verify the fix at the moment (they are stuck on an unrelated issue). However, a scratch build containing the fix was shared more than 2 months ago and no one reported that it doesn't work. My manual testing also supports that. Marking as VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0769