Version-Release number of selected component (if applicable):
openshift v3.3.0.19
kubernetes v1.3.0+507d3a7
etcd 2.3.0+git

How reproducible:
Always

Steps to Reproduce:
1. Create an egressnetworkpolicy with a correct CIDRSelector:
# oc get egressnetworkpolicy -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: EgressNetworkPolicy
  metadata:
    creationTimestamp: 2016-08-16T03:07:11Z
    name: default
    namespace: xiaocwan-t
    resourceVersion: "35332"
    selfLink: /oapi/v1/namespaces/xiaocwan-t/egressnetworkpolicies/default
    uid: 86802cb7-635e-11e6-8bce-0efe35a55201
  spec:
    egress:
    - to:
        cidrSelector: 10.66.140.0/24
      type: Allow
    - to:
        cidrSelector: 10.0.0.0/8
      type: Deny
kind: List
metadata: {}

2. Edit CIDRSelector to an invalid value with oc edit, e.g. a.b.c.d/16
3. Check the egressnetworkpolicy again

Actual results:
2. The edit succeeds without any warning message:
# oc edit egressnetworkpolicy default
egressnetworkpolicy "default" edited
3.
# oc get egressnetworkpolicy -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: EgressNetworkPolicy
  metadata:
    creationTimestamp: 2016-08-16T03:07:11Z
    name: default
    namespace: xiaocwan-t
    resourceVersion: "35403"
    selfLink: /oapi/v1/namespaces/xiaocwan-t/egressnetworkpolicies/default
    uid: 86802cb7-635e-11e6-8bce-0efe35a55201
  spec:
    egress:
    - to:
        cidrSelector: a.b.c.d/16
      type: Allow
    - to:
        cidrSelector: 10.0.0.0/8
      type: Deny
kind: List
metadata: {}

Expected results:
2. The user should get a warning message when editing cidrSelector to an invalid value.

Additional info:
When we create an egressnetworkpolicy with an invalid value, we do get such a warning:
# oc create -f e.json
The EgressNetworkPolicy "default" is invalid. spec.egress[1].to: Invalid value: "a.b.c.d/32": invalid CIDR address: a.b.c.d/32
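For illustration, the "invalid CIDR address" message in the create path comes from Go's standard net.ParseCIDR, so the missing update-path validation amounts to not running the same parse on edit. Below is a minimal sketch of that check; validateCIDR is a hypothetical helper name, not the actual function in the origin codebase:

```go
package main

import (
	"fmt"
	"net"
)

// validateCIDR mirrors the kind of check EgressNetworkPolicy
// validation should apply to cidrSelector on both create and update.
// net.ParseCIDR rejects strings like "a.b.c.d/16" with an error of
// the form: invalid CIDR address: a.b.c.d/16
func validateCIDR(cidr string) error {
	_, _, err := net.ParseCIDR(cidr)
	return err
}

func main() {
	fmt.Println(validateCIDR("10.0.0.0/8") == nil)  // well-formed CIDR passes
	fmt.Println(validateCIDR("a.b.c.d/16") != nil)  // malformed CIDR is rejected
}
```

The fix referenced in the comments below is to run this same validation in the update (oc edit) path, so the edit is rejected instead of silently persisted.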
FYI note that this was true of ClusterNetwork, HostSubnet, and NetNamespace as well. (All fixed in the linked PR.)
fixed in git
Tested on the latest origin env; the bug has been fixed. We now get the error message when setting the CIDR to an invalid value:
# * spec.egress[1].to: Invalid value: "sd.d.d.a/24": invalid CIDR address: sd.d.d.a/24
oc v1.3.0-alpha.3+bca49e5
kubernetes v1.3.0+507d3a7
This has been merged into ose and is in OSE v3.3.0.23 or newer.
Tested on OSE; the bug has been fixed.
oc v3.3.0.23-dirty
kubernetes v1.3.0+507d3a7
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1933