Bug 1867538 - Move vifs from spec to status field on KuryrPort CRD
Summary: Move vifs from spec to status field on KuryrPort CRD
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: rdobosz
QA Contact: GenadiC
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-10 10:07 UTC by Maysa Macedo
Modified: 2020-10-27 16:26 UTC
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:26:36 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-network-operator pull 759 0 None closed Bug 1867538: Kuryr: Update KuryrPort CRD to move vifs to status 2020-09-03 07:51:43 UTC
Github openshift kuryr-kubernetes pull 324 0 None closed Bug 1867538: Move vifs to 'status' in the KuryrPort CRD. 2020-09-03 07:51:43 UTC
OpenStack gerrit 743955 0 None MERGED Move vifs to 'status' in the KuryrPort CRD. 2020-09-03 07:51:42 UTC
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 16:26:48 UTC

Description Maysa Macedo 2020-08-10 10:07:12 UTC
Description of problem:

The newly added KuryrPort CRD holds the VIF information under the spec field, when it should be kept under status, as that field is meant to describe the current state of the object.
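
For illustration, a minimal sketch of the intended layout, assuming the pod identity fields (podNodeName, podUid) stay under spec; the key lists shown as comments are illustrative, not exhaustive:

# Before the fix: the serialized VIFs live under .spec
kubectl get kp <kp-name> -o json | jq '.spec | keys'
# e.g. ["podNodeName", "podUid", "vifs"]

# After the fix: .spec keeps only the pod identity and the VIFs move to .status
kubectl get kp <kp-name> -o json | jq '.spec | keys'
# e.g. ["podNodeName", "podUid"]
kubectl get kp <kp-name> -o json | jq '.status | keys'
# e.g. ["vifs"]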

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 rdobosz 2020-08-20 07:44:24 UTC
Before this change, you can observe that all the information about the VIFs is on the KuryrPort CRD under the 'spec' key.

Steps to reproduce:
1. perform a cluster install
2. execute "kubectl run --image kuryr/demo demo"
3. wait until "kubectl get pod" shows the pod in the Running state
4. "kubectl get kp <kp-name> -o json | jq -C '.status'" should return null, while "kubectl get kp <kp-name> -o json | jq -C '.spec.vifs'" should return the information regarding the VIFs

After the change was applied:
1. do steps 1-3 from the previously described steps
2. "kubectl get kp <kp-name> -o json | jq -C '.spec'" should return null
3. "kubectl get kp <kp-name> -o json | jq -C '.status.vifs'" should return the information about the VIFs.

Comment 4 rdobosz 2020-08-21 06:05:13 UTC
> 2. "kubectl get kp <kp-name> -o json | jq -C '.spec'" should return null

Should be:

"kubectl get kp <kp-name> -o json| jq -C '.spec.vifs'" should return null

Comment 5 rlobillo 2020-09-04 08:48:27 UTC
Verified on 4.6.0-0.nightly-2020-09-03-063148 over RHOS-16.1-RHEL-8-20200831.n.1.

[stack@undercloud-0 ~]$ kubectl get kp -o json  | jq '.items[0].spec'
{
  "podNodeName": "ostest-6dh4x-worker-0-bdrmv",
  "podUid": "2ca062d2-71d6-4f48-88b5-7425d070ab68"
}
[stack@undercloud-0 ~]$ kubectl get kp -o json  | jq '.items[0].spec.vifs'
null
[stack@undercloud-0 ~]$ kubectl get kp -o json  | jq '.items[0].status.vifs'
{
  "eth0": {
    "default": true,
    "vif": {
      "versioned_object.data": {
        "active": true,
        "address": "fa:16:3e:ac:33:20",
        "has_traffic_filtering": false,
        "id": "08b09e45-1ae9-423f-b33b-dde3b18648f7",
        "network": {
          "versioned_object.data": {
            "id": "1462363a-6204-4753-91e1-ead5af5ac351",
            "label": "ns/test-net",
            "mtu": 1442,
            "multi_host": false,
            "should_provide_bridge": false,
            "should_provide_vlan": false,
            "subnets": {
              "versioned_object.data": {
                "objects": [
                  {
                    "versioned_object.data": {
                      "cidr": "10.128.116.0/23",
                      "dns": [],
                      "gateway": "10.128.116.1",
                      "ips": {
                        "versioned_object.data": {
                          "objects": [
                            {
                              "versioned_object.data": {
                                "address": "10.128.116.141"
                              },
                              "versioned_object.name": "FixedIP",
                              "versioned_object.namespace": "os_vif",
                              "versioned_object.version": "1.0"
                            }
                          ]
                        },
                        "versioned_object.name": "FixedIPList",
                        "versioned_object.namespace": "os_vif",
                        "versioned_object.version": "1.0"
                      },
                      "routes": {
                        "versioned_object.data": {
                          "objects": []
                        },
                        "versioned_object.name": "RouteList",
                        "versioned_object.namespace": "os_vif",
                        "versioned_object.version": "1.0"
                      }
                    },
                    "versioned_object.name": "Subnet",
                    "versioned_object.namespace": "os_vif",
                    "versioned_object.version": "1.0"
                  }
                ]
              },
              "versioned_object.name": "SubnetList",
              "versioned_object.namespace": "os_vif",
              "versioned_object.version": "1.0"
            }
          },
          "versioned_object.name": "Network",
          "versioned_object.namespace": "os_vif",
          "versioned_object.version": "1.1"
        },
        "plugin": "noop",
        "preserve_on_delete": false,
        "vif_name": "tap08b09e45-1a",
        "vlan_id": 2336
      },
      "versioned_object.name": "VIFVlanNested",
      "versioned_object.namespace": "os_vif",
      "versioned_object.version": "1.0"
    }
  }
}

Comment 7 errata-xmlrpc 2020-10-27 16:26:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

