Description of problem:

When searching via the REST API with All-Content: True, the response is returned without the additional content.

Version-Release number of selected component (if applicable):

Only on the rhevm-appliance:
rhevm-3.6.7.2-0.1.el6.noarch
rhevm-restapi-3.6.7.2-0.1.el6.noarch

Works on:
rhevm-restapi-3.6.7.3-0.1.el6.noarch
rhevm-3.6.7.3-0.1.el6.noarch

Also works on all previous rhevm builds (not rhevm-appliance).

Steps to Reproduce:
1. Send GET /api/clusters?search=name%3D<cluster_name> with the header "All-Content: True"

Actual results:
The All-Content data is not returned (the management_network href is missing).

Expected results:
The All-Content data is returned (the management_network href is present).
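A minimal check along these lines shows the problem (sketch only; <engine-url>, the credentials, and <cluster_name> are placeholders). On an affected build the grep prints nothing:

---8<---
#!/bin/sh
# Sketch: request one cluster with All-Content and look for the
# management_network href. Placeholders: <engine-url>, credentials,
# <cluster_name>. Prints nothing on an affected build.
curl -s -k \
  -u "admin@internal:password" \
  -H "Accept: application/xml" \
  -H "All-Content: True" \
  "https://<engine-url>/ovirt-engine/api/clusters?search=name%3D<cluster_name>" \
  | grep management_network
--->8---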
I installed 3.6.7.2-0.1 and I can't reproduce the error: the management network is always returned, no matter if I use search or not, or if I request all the clusters or just one.

In the description you mention that it doesn't work with 3.6.7.2, but that it does work with 3.6.7.3. Is that accurate? It works with a newer version but not with an older one?

How exactly are you sending the "All-Content" header? With the "curl" command? With the Python SDK? Can you try with a script like this and report the results?

---8<---
#!/bin/sh -ex

url="https://rhevm36.example.com/api"
user="admin@internal"
password="mypassword"
cluster="mycluster"

curl \
  --verbose \
  --insecure \
  --cacert /etc/pki/ovirt-engine/ca.pem \
  --request GET \
  --user "${user}:${password}" \
  --header "Accept: application/xml" \
  --header "All-Content: True" \
  "${url}/clusters?search=name%3D${cluster}"
--->8---
Sending:

curl -i -X GET \
  -H "Authorization:Basic YWRtaW5AaW50ZXJuYWw6MTIzNDU2" \
  -H "Content-Type:application/xml" \
  -H "All-content:True" \
  'https://<engine-url>/ovirt-engine/api/clusters?search=golden_env_mixed_1'

Getting:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<clusters>
  <cluster href="/ovirt-engine/api/clusters/c6e398c8-cddd-4d35-b02a-e0a94d5b76b4" id="c6e398c8-cddd-4d35-b02a-e0a94d5b76b4">
    <actions>
      <link href="/ovirt-engine/api/clusters/c6e398c8-cddd-4d35-b02a-e0a94d5b76b4/resetemulatedmachine" rel="resetemulatedmachine"/>
    </actions>
    <name>golden_env_mixed_1</name>
    <description></description>
    <link href="/ovirt-engine/api/clusters/c6e398c8-cddd-4d35-b02a-e0a94d5b76b4/networks" rel="networks"/>
    <link href="/ovirt-engine/api/clusters/c6e398c8-cddd-4d35-b02a-e0a94d5b76b4/permissions" rel="permissions"/>
    <link href="/ovirt-engine/api/clusters/c6e398c8-cddd-4d35-b02a-e0a94d5b76b4/affinitygroups" rel="affinitygroups"/>
    <link href="/ovirt-engine/api/clusters/c6e398c8-cddd-4d35-b02a-e0a94d5b76b4/cpuprofiles" rel="cpuprofiles"/>
    <cpu id="Intel Conroe Family">
      <architecture>X86_64</architecture>
    </cpu>
    <data_center href="/ovirt-engine/api/datacenters/fa012945-967f-443b-9fa6-0ad8fe368694" id="fa012945-967f-443b-9fa6-0ad8fe368694"/>
    <memory_policy>
      <overcommit percent="200"/>
      <transparent_hugepages>
        <enabled>true</enabled>
      </transparent_hugepages>
    </memory_policy>
    <scheduling_policy href="/ovirt-engine/api/schedulingpolicies/5a2b0939-7d46-4b73-a469-e9c2c7fc6a53" id="5a2b0939-7d46-4b73-a469-e9c2c7fc6a53">
      <name>power_saving</name>
      <policy>power_saving</policy>
      <thresholds low="21" high="61" duration="240"/>
      <properties>
        <property>
          <name>HighUtilization</name>
          <value>61</value>
        </property>
        <property>
          <name>CpuOverCommitDurationMinutes</name>
          <value>4</value>
        </property>
        <property>
          <name>LowUtilization</name>
          <value>21</value>
        </property>
      </properties>
    </scheduling_policy>
    <version major="3" minor="6"/>
    <error_handling>
      <on_error>migrate</on_error>
    </error_handling>
    <virt_service>true</virt_service>
    <gluster_service>false</gluster_service>
    <threads_as_cores>false</threads_as_cores>
    <tunnel_migration>false</tunnel_migration>
    <trusted_service>false</trusted_service>
    <ha_reservation>false</ha_reservation>
    <optional_reason>false</optional_reason>
    <maintenance_reason_required>false</maintenance_reason_required>
    <ballooning_enabled>false</ballooning_enabled>
    <ksm>
      <enabled>false</enabled>
      <merge_across_nodes>true</merge_across_nodes>
    </ksm>
    <required_rng_sources/>
    <fencing_policy>
      <enabled>true</enabled>
      <skip_if_sd_active>
        <enabled>false</enabled>
      </skip_if_sd_active>
      <skip_if_connectivity_broken>
        <enabled>false</enabled>
        <threshold>50</threshold>
      </skip_if_connectivity_broken>
    </fencing_policy>
    <migration>
      <auto_converge>inherit</auto_converge>
      <compressed>inherit</compressed>
    </migration>
  </cluster>
</clusters>

management_network is missing. 3.6.7.3 is a non-rhevm-appliance engine and 3.6.7.2 is the rhevm-appliance engine. Ping me if you need the environment.
I think that the difference between those two environments is the value of "ApplicationMode". In the environment that doesn't work correctly the value is "VirtOnly"; in the one that works it is probably "AllModes". The root cause of the bug is that when the mode is "VirtOnly" we ignore the "All-Content" header.

I'm lowering the severity because the problem can be avoided by manually changing that configuration in the database:

engine=# update vdc_options set option_value = 255 where option_name = 'ApplicationMode';
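For reference, a sketch of applying that workaround from a shell. Assumptions: the engine uses the default local PostgreSQL database named "engine", the numeric values map as discussed in this bug (1 = VirtOnly, 255 = AllModes), and the engine reads vdc_options at startup, so it has to be restarted for the change to take effect:

---8<---
#!/bin/sh -ex

# Check the current mode first (1 = VirtOnly, 255 = AllModes, per the
# values discussed in this bug).
sudo -u postgres psql engine \
  -c "select option_name, option_value from vdc_options where option_name = 'ApplicationMode';"

# Apply the workaround.
sudo -u postgres psql engine \
  -c "update vdc_options set option_value = 255 where option_name = 'ApplicationMode';"

# Assumption: the engine caches vdc_options at startup, so restart it.
service ovirt-engine restart
--->8---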
Verified in rhevm-3.6.8-0.1.el6.noarch.

engine=# select * from vdc_options where option_name = 'ApplicationMode';
 option_id |   option_name   | option_value | version
-----------+-----------------+--------------+---------
        18 | ApplicationMode | 1            | general
(1 row)

Note that ApplicationMode is still 1 (VirtOnly), yet the management_network href is now returned:

curl -k -X GET \
  -H "Accept: application/xml" \
  -H "Content-Type: application/xml" \
  -H "All-Content: True" \
  -u admin@internal:123456 \
  --cacert ca.crt \
  "https://${engine}:443/ovirt-engine/api/clusters?search=name%3DDefault" \
  | grep management

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3047  100  3047    0     0   7261      0 --:--:-- --:--:-- --:--:--  7272

<management_network href="/ovirt-engine/api/clusters/00000002-0002-0002-0002-0000000003a2/networks/00000000-0000-0000-0000-000000000009" id="00000000-0000-0000-0000-000000000009"/>
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-1507.html