Bug 1142916 - Impossible to get schedulerpolicyunit by id via REST
Summary: Impossible to get schedulerpolicyunit by id via REST
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-restapi
Version: 3.5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Gilad Chaplik
QA Contact: Artyom
URL:
Whiteboard: sla
Depends On:
Blocks: rhev3.5beta3
 
Reported: 2014-09-17 15:23 UTC by Artyom
Modified: 2016-02-10 20:19 UTC
CC: 13 users

Fixed In Version: org.ovirt.engine-root-3.5.0-14
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-02-17 17:17:00 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 33435 0 master MERGED engine: fix api/schedulingpolicyunits/policy_unit_id Never
oVirt gerrit 33457 0 ovirt-engine-3.5 MERGED engine: fix api/schedulingpolicyunits/policy_unit_id Never

Description Artyom 2014-09-17 15:23:43 UTC
Description of problem:
Impossible to get schedulerpolicyunit by id via REST

Version-Release number of selected component (if applicable):
rhevm-3.5.0-0.12.beta.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Send a GET request to /ovirt-engine/api/schedulingpolicyunits/some_unit_policy_id

Actual results:
<fault>
<reason>Operation Failed</reason>
<detail>User is not logged in.</detail>
</fault>

Expected results:
The operation succeeds and returns only the information for some_unit_policy_id.

Additional info:

Comment 1 Juan Hernández 2014-09-17 15:34:16 UTC
I can't reproduce this. Can you give more details of how you are sending the request? Are you using the curl command? Can you run it with the --verbose option and report the results?

Note that the results will include your password in the "Authorization" header, so be careful not to expose it.

Comment 2 Artyom 2014-09-18 09:07:10 UTC
Request for scheduler policy unit:
curl -X GET -H "Accept: application/xml" -u admin@internal:1 -k https://10.35.163.98:443/ovirt-engine/api/schedulingpolicyunits/d58c8e32-44e1-418f-9222-52cd887bf9e0 -v
* About to connect() to 10.35.163.98 port 443 (#0)
*   Trying 10.35.163.98...
* Connected to 10.35.163.98 (10.35.163.98) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_DHE_RSA_WITH_AES_128_CBC_SHA
* Server certificate:
*       subject: CN=dhcp163-98.scl.lab.tlv.redhat.com,O=scl.lab.tlv.redhat.com,C=US
*       start date: Sep 13 10:34:21 2014 GMT
*       expire date: Aug 19 10:34:21 2019 GMT
*       common name: dhcp163-98.scl.lab.tlv.redhat.com
*       issuer: CN=dhcp163-98.scl.lab.tlv.redhat.com.62874,O=scl.lab.tlv.redhat.com,C=US
* Server auth using Basic with user 'admin@internal'
> GET /ovirt-engine/api/schedulingpolicyunits/d58c8e32-44e1-418f-9222-52cd887bf9e0 HTTP/1.1
> Authorization: Basic YWRtaW5AaW50ZXJuYWw6MQ==
> User-Agent: curl/7.29.0
> Host: 10.35.163.98
> Accept: application/xml
> 
< HTTP/1.1 401 Unauthorized
< Date: Thu, 18 Sep 2014 08:56:01 GMT
< Content-Type: application/xml
< Content-Length: 142
< Vary: Accept-Encoding
< Connection: close
< 
* Closing connection 0
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><fault><reason>Operation Failed</reason><detail>User is not logged in.</detail></fault>


Working request:
curl -X GET -H "Accept: application/xml" -u admin@internal:1 -k https://10.35.163.98:443/ovirt-engine/api/schedulingpolicyunits -v
* About to connect() to 10.35.163.98 port 443 (#0)
*   Trying 10.35.163.98...
* Connected to 10.35.163.98 (10.35.163.98) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_DHE_RSA_WITH_AES_128_CBC_SHA
* Server certificate:
*       subject: CN=dhcp163-98.scl.lab.tlv.redhat.com,O=scl.lab.tlv.redhat.com,C=US
*       start date: Sep 13 10:34:21 2014 GMT
*       expire date: Aug 19 10:34:21 2019 GMT
*       common name: dhcp163-98.scl.lab.tlv.redhat.com
*       issuer: CN=dhcp163-98.scl.lab.tlv.redhat.com.62874,O=scl.lab.tlv.redhat.com,C=US
* Server auth using Basic with user 'admin@internal'
> GET /ovirt-engine/api/schedulingpolicyunits HTTP/1.1
> Authorization: Basic YWRtaW5AaW50ZXJuYWw6MQ==
> User-Agent: curl/7.29.0
> Host: 10.35.163.98
> Accept: application/xml
> 
< HTTP/1.1 200 OK
< Date: Thu, 18 Sep 2014 08:59:17 GMT
< Pragma: No-cache
< Cache-Control: no-cache
< Expires: Thu, 01 Jan 1970 02:00:00 IST
< Content-Type: application/xml
< Vary: Accept-Encoding
< Connection: close
< Transfer-Encoding: chunked
< 
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<scheduling_policy_units>
    <scheduling_policy_unit type="load_balancing" href="/ovirt-engine/api/schedulingpolicyunits/d58c8e32-44e1-418f-9222-52cd887bf9e0" id="d58c8e32-44e1-418f-9222-52cd887bf9e0">
        <name>OptimalForEvenGuestDistribution</name>
        <description>Even VM count distribution policy</description>
        <internal>true</internal>
        <enabled>true</enabled>
        <properties>
            <property>
                <name>HighVmCount</name>
                <value>^([0-9]|[1-9][0-9]+)$</value>
            </property>
            <property>
                <name>MigrationThreshold</name>
                <value>^([2-9]|[1-9][0-9]+)$</value>
            </property>
            <property>
                <name>SpmVmGrace</name>
                <value>^([0-9]|[1-9][0-9]+)$</value>
            </property>
        </properties>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="filter" href="/ovirt-engine/api/schedulingpolicyunits/12262ab6-9690-4bc3-a2b3-35573b172d54" id="12262ab6-9690-4bc3-a2b3-35573b172d54">
        <name>PinToHost</name>
        <description>Filters out all hosts that VM is not pinned to</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="filter" href="/ovirt-engine/api/schedulingpolicyunits/438b052c-90ab-40e8-9be0-a22560202ea6" id="438b052c-90ab-40e8-9be0-a22560202ea6">
        <name>CPU-Level</name>
        <description>Runs VMs only on hosts with a proper CPU level</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="load_balancing" href="/ovirt-engine/api/schedulingpolicyunits/736999d0-1023-46a4-9a75-1316ed50e151" id="736999d0-1023-46a4-9a75-1316ed50e151">
        <name>OptimalForPowerSaving</name>
        <description>Load balancing VMs in cluster according to hosts CPU load, striving cluster's hosts CPU load to be over 'LowUtilization' and under 'HighUtilization'</description>
        <internal>true</internal>
        <enabled>true</enabled>
        <properties>
            <property>
                <name>CpuOverCommitDurationMinutes</name>
                <value>^([1-9][0-9]*)$</value>
            </property>
            <property>
                <name>HighUtilization</name>
                <value>^([5-9][0-9])$</value>
            </property>
            <property>
                <name>LowUtilization</name>
                <value>^([0-9]|[1-4][0-9])$</value>
            </property>
            <property>
                <name>HostsInReserve</name>
                <value>^[0-9][0-9]*$</value>
            </property>
            <property>
                <name>EnableAutomaticHostPowerManagement</name>
                <value>^(true|false)$</value>
            </property>
        </properties>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="filter" href="/ovirt-engine/api/schedulingpolicyunits/c9ddbb34-0e1d-4061-a8d7-b0893fa80932" id="c9ddbb34-0e1d-4061-a8d7-b0893fa80932">
        <name>Memory</name>
        <description>Filters out hosts that have insufficient memory to run the VM</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="weight" href="/ovirt-engine/api/schedulingpolicyunits/98e92667-6161-41fb-b3fa-34f820ccbc4b" id="98e92667-6161-41fb-b3fa-34f820ccbc4b">
        <name>HA</name>
        <description>Weights hosts according to their HA score</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="weight" href="/ovirt-engine/api/schedulingpolicyunits/7f262d70-6cac-11e3-981f-0800200c9a66" id="7f262d70-6cac-11e3-981f-0800200c9a66">
        <name>OptimalForHaReservation</name>
        <description>Weights hosts according to their HA score regardless of hosted engine</description>
        <internal>true</internal>
        <enabled>true</enabled>
        <properties>
            <property>
                <name>ScaleDown</name>
                <value>(100|[1-9]|[1-9][0-9])$</value>
            </property>
        </properties>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="filter" href="/ovirt-engine/api/schedulingpolicyunits/3fa923ac-f490-422e-9f2f-50716838d24b" id="3fa923ac-f490-422e-9f2f-50716838d24b">
        <name>max_vms</name>
        <description></description>
        <internal>false</internal>
        <enabled>true</enabled>
        <properties>
            <property>
                <name>maximum_vm_count</name>
                <value>[0-9]*</value>
            </property>
        </properties>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="weight" href="/ovirt-engine/api/schedulingpolicyunits/38440000-8cf0-14bd-c43e-10b96e4ef00b" id="38440000-8cf0-14bd-c43e-10b96e4ef00b">
        <name>None</name>
        <description>Follows Even Distribution weight module</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="weight" href="/ovirt-engine/api/schedulingpolicyunits/3ba8c988-f779-42c0-90ce-caa8243edee7" id="3ba8c988-f779-42c0-90ce-caa8243edee7">
        <name>OptimalForEvenGuestDistribution</name>
        <description>Weights host according the number of running VMs</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="filter" href="/ovirt-engine/api/schedulingpolicyunits/6d636bf6-a35c-4f9d-b68d-0731f720cddc" id="6d636bf6-a35c-4f9d-b68d-0731f720cddc">
        <name>CPU</name>
        <description>Filters out hosts with less CPUs than VM's CPUs</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="weight" href="/ovirt-engine/api/schedulingpolicyunits/84e6ddee-ab0d-42dd-82f0-c297779db567" id="84e6ddee-ab0d-42dd-82f0-c297779db567">
        <name>VmAffinityGroups</name>
        <description>Enables Affinity Groups soft enforcement for VMs; VMs in group are most likely to run either on the same hypervisor host (positive) or on independent hypervisor hosts (negative)</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="filter" href="/ovirt-engine/api/schedulingpolicyunits/84e6ddee-ab0d-42dd-82f0-c297779db566" id="84e6ddee-ab0d-42dd-82f0-c297779db566">
        <name>VmAffinityGroups</name>
        <description>Enables Affinity Groups hard enforcement for VMs; VMs in group are required to run either on the same hypervisor host (positive) or on independent hypervisor hosts (negative)</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="weight" href="/ovirt-engine/api/schedulingpolicyunits/736999d0-1023-46a4-9a75-1316ed50e15b" id="736999d0-1023-46a4-9a75-1316ed50e15b">
        <name>OptimalForPowerSaving</name>
        <description>Gives hosts with higher CPU usage, lower weight (means that hosts with higher CPU usage are more likely to be selected)</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="filter" href="/ovirt-engine/api/schedulingpolicyunits/e659c871-0bf1-4ccc-b748-f28f5d08dffd" id="e659c871-0bf1-4ccc-b748-f28f5d08dffd">
        <name>HA</name>
        <description>Runs the hosted engine VM only on hosts with a positive score</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="load_balancing" href="/ovirt-engine/api/schedulingpolicyunits/7db4ab05-81ab-42e8-868a-aee2df483ed2" id="7db4ab05-81ab-42e8-868a-aee2df483ed2">
        <name>OptimalForEvenDistribution</name>
        <description>Load balancing VMs in cluster according to hosts CPU load, striving cluster's hosts CPU load to be under 'HighUtilization'</description>
        <internal>true</internal>
        <enabled>true</enabled>
        <properties>
            <property>
                <name>CpuOverCommitDurationMinutes</name>
                <value>^([1-9][0-9]*)$</value>
            </property>
            <property>
                <name>HighUtilization</name>
                <value>^([5-9][0-9])$</value>
            </property>
        </properties>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="weight" href="/ovirt-engine/api/schedulingpolicyunits/7db4ab05-81ab-42e8-868a-aee2df483edb" id="7db4ab05-81ab-42e8-868a-aee2df483edb">
        <name>OptimalForEvenDistribution</name>
        <description>Gives hosts with lower CPU usage, lower weight (means that hosts with lower CPU usage are more likely to be selected)</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="filter" href="/ovirt-engine/api/schedulingpolicyunits/72163d1c-9468-4480-99d9-0888664eb143" id="72163d1c-9468-4480-99d9-0888664eb143">
        <name>Network</name>
        <description>Filters out hosts that are missing networks required by VM NICs, or missing cluster's display network</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
    <scheduling_policy_unit type="load_balancing" href="/ovirt-engine/api/schedulingpolicyunits/38440000-8cf0-14bd-c43e-10b96e4ef00a" id="38440000-8cf0-14bd-c43e-10b96e4ef00a">
        <name>None</name>
        <description>No load balancing operation</description>
        <internal>true</internal>
        <enabled>true</enabled>
    </scheduling_policy_unit>
</scheduling_policy_units>
* Closing connection 0

Comment 3 Juan Hernández 2014-09-18 13:38:39 UTC
This happens because when the request for the policy unit is received, Resteasy calls the "getSchedulingPolicyUnitSubResource" method of the "BackendSchedulingPolicyUnitsResource" class to find the resource that will handle the request. This method is implemented as follows:

    @Override
    @SingleEntityResource
    public SchedulingPolicyUnitResource getSchedulingPolicyUnitSubResource(@PathParam("id") String id) {
        return inject(new BackendSchedulingPolicyUnitResource(id, getPolicyUnit(id)));
    }

The relevant detail is that this method calls "getPolicyUnit", which in turn calls "getCollection", which calls the backend to retrieve the policy units.

To call the backend we need to include the backend session identifier, which is stored in a thread local managed by the "SessionHelper" class and populated by the "SessionProcessor" class. "SessionProcessor" is implemented as a Resteasy interceptor with "SECURITY" precedence. Unfortunately, Resteasy doesn't call this interceptor before the actual resource is invoked, so at this point it hasn't been called yet: no session identifier is available, and from the backend's point of view the user isn't logged in.
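
The ordering problem can be illustrated with a minimal, self-contained sketch. All names below (InterceptorOrderingSketch, securityInterceptor, resourceLocator) are hypothetical stand-ins, not the engine's actual classes; the sketch only models a thread-local session that is read before the interceptor that populates it has run:

    // A self-contained illustration of the interceptor ordering problem;
    // all names here are hypothetical, not the engine's real classes.
    public class InterceptorOrderingSketch {

        // Stand-in for the thread local managed by SessionHelper.
        private static final ThreadLocal<String> SESSION = new ThreadLocal<>();

        // Stand-in for what SessionProcessor does: store the backend session
        // identifier for the current thread before the request is handled.
        static void securityInterceptor(String sessionId) {
            SESSION.set(sessionId);
        }

        // Stand-in for getSchedulingPolicyUnitSubResource(): it needs the
        // backend, and therefore the session identifier, immediately.
        static String resourceLocator(String id) {
            String sessionId = SESSION.get();
            if (sessionId == null) {
                // Path taken in this bug: the interceptor has not run yet.
                throw new IllegalStateException("User is not logged in.");
            }
            return "policy unit " + id + " (session " + sessionId + ")";
        }

        public static void main(String[] args) {
            // The order seen in this bug: the locator runs before the
            // SECURITY interceptor, so the eager lookup fails.
            try {
                resourceLocator("d58c8e32-44e1-418f-9222-52cd887bf9e0");
            } catch (IllegalStateException e) {
                System.out.println("Eager lookup in the locator failed: " + e.getMessage());
            }
            securityInterceptor("backend-session-id");
        }
    }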

To avoid this problem, the "getSchedulingPolicyUnitSubResource" method should be changed so that it passes only the identifier of the policy unit to the "BackendSchedulingPolicyUnitResource" constructor, not the resolved instance. The "get" method of "BackendSchedulingPolicyUnitResource" should then use the id to call the backend and retrieve the instance.
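
A minimal sketch of that approach, again with hypothetical stand-in names (LazyLookupSketch, PolicyUnitResource, a Map standing in for the engine backend) rather than the actual gerrit patch: the locator only stores the id, and the lookup is deferred to the sub-resource's "get" method, which runs after the interceptors:

    import java.util.Map;

    // A self-contained illustration of the proposed fix; names are
    // hypothetical and a Map stands in for the engine backend.
    public class LazyLookupSketch {

        // Stand-in for BackendSchedulingPolicyUnitResource: the constructor
        // only stores the id, it does not touch the backend.
        static class PolicyUnitResource {
            private final String id;

            PolicyUnitResource(String id) {
                this.id = id;
            }

            // The backend lookup happens here, inside the request method,
            // by which time the security interceptor has already run and a
            // session is available.
            String get(Map<String, String> backend) {
                return backend.get(id);
            }
        }

        // Stand-in for the fixed getSchedulingPolicyUnitSubResource(): it no
        // longer calls the backend, it just constructs the sub-resource.
        static PolicyUnitResource locator(String id) {
            return new PolicyUnitResource(id);
        }

        public static void main(String[] args) {
            Map<String, String> backend = Map.of(
                    "d58c8e32-44e1-418f-9222-52cd887bf9e0",
                    "OptimalForEvenGuestDistribution");
            PolicyUnitResource resource = locator("d58c8e32-44e1-418f-9222-52cd887bf9e0");
            System.out.println(resource.get(backend));
        }
    }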

Comment 4 Eyal Edri 2014-10-07 07:13:17 UTC
This bug's status was moved to MODIFIED before engine vt5 was built, hence it is being moved to ON_QA.
If this was a mistake and the fix isn't in, please contact rhev-integ.

Comment 5 Artyom 2014-10-07 10:50:21 UTC
Verified on vt5

Comment 6 Eyal Edri 2015-02-17 17:17:00 UTC
RHEV 3.5.0 was released. Closing.

