| Summary: | openshift-iptables-port-proxy service status failed after upgrade node from 1.2 to 2.0 | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Johnny Liu <jialiu> |
| Component: | Cluster Version Operator | Assignee: | Jason DeTiberus <jdetiber> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | libra bugs <libra-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.0.0 | CC: | bleanhar, jdetiber, jialiu, libra-onpremise-devel |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-02-04 14:45:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
This is fixed in the latest upgrade puddle: http://etherpad.corp.redhat.com/ose-2-0-upgrade-2013-12-02. Please note that this pad has changed the upgrade steps to use a puddle, so a few workarounds are needed to make sure the right bits are installed during the upgrade.

Verified this bug with http://etherpad.corp.redhat.com/ose-2-0-upgrade-2013-12-02, and PASS.

Following http://etherpad.corp.redhat.com/ose-2-0-upgrade-2013-12-05, this bug came back, so re-assigning it.

I'm not able to replicate this bug with the 2013-12-05 puddle. The latest upgrader stops openshift-iptables-port-proxy after doing the rules migration, and then does a "service openshift-iptables-port-proxy start" at the end_maintenance_mode step. The duplicate rules came from doing a service start with the rules already in place. Can you attach the output of the upgrade.log from the node? The output of "rpm -qa | grep openshift" would help as well.

origin-server PR: https://github.com/openshift/origin-server/pull/4303

Commit pushed to master at https://github.com/openshift/origin-server: https://github.com/openshift/origin-server/commit/3f25793936fbd5272cc30eae1256bd4c98c3ba32
Bug 1035176 - openshift-iptables-port-proxy start causes duplicate rules
Updated oo-admin-ctl-iptables-port-proxy start to flush rules prior to applying them.

enterprise-server PR: https://github.com/openshift/enterprise-server/pull/167

The latest pad (http://etherpad.corp.redhat.com/ose-2-0-upgrade-2013-12-11) includes an updated openshift-iptables-port-proxy that flushes the chain prior to a start/reload to prevent the duplicate rules.

Verified this bug following http://etherpad.corp.redhat.com/ose-2-0-upgrade-2013-12-11, and PASS.
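The commit message above describes the fix as flushing rules before applying them, which makes the "start" action idempotent. A minimal sketch of that idea, with the iptables chain mocked as a temp file so it runs without root (rule strings and function names here are illustrative, not the real oo-admin-ctl-iptables-port-proxy code):

```shell
#!/bin/sh
# Mock chain: a file standing in for the rhc-app-comm iptables chain.
RULES=$(mktemp)

flush_chain() {
    # Stands in for the flush step the fix added (e.g. iptables -F rhc-app-comm).
    : > "$RULES"
}

apply_rules() {
    # Stands in for the -I/-A rule inserts seen in upgrade.log.
    echo "ACCEPT tcp dport 8080 comment 38036" >> "$RULES"
    echo "ACCEPT tcp dport 7600 comment 38037" >> "$RULES"
}

start_proxy() {
    flush_chain      # without this flush, every start duplicates the rules
    apply_rules
}

start_proxy
start_proxy          # a second start no longer doubles the rule count
count=$(wc -l < "$RULES")
echo "$count"
rm -f "$RULES"
```

Run twice back to back, the mock chain still holds two rules instead of four, which is exactly the behavior the upgrade's end_maintenance_mode restart needed.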
Description of problem:
ose-upgrade adds duplicated iptables rules for a scalable app, which makes "service openshift-iptables-port-proxy status" return failure.

Version-Release number of selected component (if applicable):
openshift-enterprise-upgrade-node-2.0.0b-1.git.16.a0bff65.el6op.noarch
openshift-enterprise-release-2.0.0b-1.git.16.a0bff65.el6op.noarch

How reproducible:
Always

Steps to Reproduce:
1. Set up an OSE 1.2 environment.
2. Create a scalable jbosseap app and embed mysql into it.
3. Upgrade the environment to 2.0.
4. Run "oo-diagnostics" or "service openshift-iptables-port-proxy status".

Actual results:
# oo-diagnostics -v
<--snip-->
INFO: running: test_services_enabled
INFO: checking that required services are running now
FAIL: test_services_enabled
The following service(s) are not currently started: openshift-iptables-port-proxy
These services are required for OpenShift functionality.
<--snip-->

# service openshift-iptables-port-proxy status
WARNING: A difference has been detected between state of /etc/openshift/iptables.filter.rules and the rhc-app-comm iptables chain.

As seen in upgrade.log, ose-upgrade added the iptables rules twice.
INFO: running /usr/lib/ruby/site_ruby/1.8/ose-upgrade/node/upgrades/2/conf/09-node-openshift-iptables-port-proxy-conf
INFO: /usr/lib/ruby/site_ruby/1.8/ose-upgrade/node/upgrades/2/conf/09-node-openshift-iptables-port-proxy-conf ran without error:
--BEGIN OUTPUT--
<--snip-->
+ migrate_all
+ sed -r -n -e 's/^listen ([0-9]+):([0-9.]+):([0-9]+)/\1 \2 \3/ p' /etc/openshift/port-proxy.cfg.rpmsave
+ read port daddr dport
+ echo 38036 127.1.244.129:8080
38036 127.1.244.129:8080
+ oo-iptables-port-proxy addproxy 38036 127.1.244.129:8080
-I rhc-app-comm 1 -d 127.1.244.129 -p tcp --dport 8080 -j ACCEPT -m comment --comment 38036
-I rhc-app-comm 1 -d 127.1.244.129 -m conntrack --ctstate NEW -m tcp -p tcp --dport 8080 -j ACCEPT -m comment --comment 38036
-A OUTPUT -d 192.168.59.198/32 -m tcp -p tcp --dport 38036 -j DNAT --to-destination 127.1.244.129:8080
-A PREROUTING -d 192.168.59.198/32 -m tcp -p tcp --dport 38036 -j DNAT --to-destination 127.1.244.129:8080
+ read port daddr dport
+ echo 38037 127.1.244.129:7600
38037 127.1.244.129:7600
+ oo-iptables-port-proxy addproxy 38037 127.1.244.129:7600
-I rhc-app-comm 1 -d 127.1.244.129 -p tcp --dport 7600 -j ACCEPT -m comment --comment 38037
-I rhc-app-comm 1 -d 127.1.244.129 -m conntrack --ctstate NEW -m tcp -p tcp --dport 7600 -j ACCEPT -m comment --comment 38037
-A OUTPUT -d 192.168.59.198/32 -m tcp -p tcp --dport 38037 -j DNAT --to-destination 127.1.244.129:7600
-A PREROUTING -d 192.168.59.198/32 -m tcp -p tcp --dport 38037 -j DNAT --to-destination 127.1.244.129:7600

The "service openshift-iptables-port-proxy start" then adds duplicated iptables rules:
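For reference, the migrate_all step in the trace parses the old 1.2 config with a single sed expression that turns "listen PORT:ADDR:DPORT" lines into "PORT ADDR DPORT" triples for the read loop. A runnable reproduction of just that parse, using the exact sed expression from the log against a mocked config file (the real input is /etc/openshift/port-proxy.cfg.rpmsave):

```shell
#!/bin/sh
# Mocked stand-in for /etc/openshift/port-proxy.cfg.rpmsave with two listeners.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
listen 38036:127.1.244.129:8080
listen 38037:127.1.244.129:7600
EOF

# Same sed expression as in the upgrade log: split the colon-delimited
# "listen" entries into space-separated fields for `read port daddr dport`.
parsed=$(sed -r -n -e 's/^listen ([0-9]+):([0-9.]+):([0-9]+)/\1 \2 \3/ p' "$cfg")
echo "$parsed"
rm -f "$cfg"
```

Each output line then feeds one oo-iptables-port-proxy addproxy call, which is why every listener in the old config yields one batch of rules per pass, and a second pass over the same input doubles them.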
INFO: /usr/lib/ruby/site_ruby/1.8/ose-upgrade/node/upgrades/2/end_maintenance_mode/04-node-start-openshift-iptables-port-proxy ran without error:
--BEGIN OUTPUT--
+ service openshift-iptables-port-proxy start
--END /usr/lib/ruby/site_ruby/1.8/ose-upgrade/node/upgrades/2/end_maintenance_mode/04-node-start-openshift-iptables-port-proxy OUTPUT--

Expected results:
Should not add the iptables rules for the scalable app twice.

Additional info:
After rebooting the node, "service openshift-iptables-port-proxy status" returns success.
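The WARNING in the status output implies a comparison between the saved rule file and the live chain, with any drift reported as a failure. A sketch of that kind of check, with both rule lists mocked as files so it runs without root (the real status action reads /etc/openshift/iptables.filter.rules and the live rhc-app-comm chain; rule text here is illustrative):

```shell
#!/bin/sh
# Mocked saved-state file and live-chain listing, one rule per line.
saved=$(mktemp)
live=$(mktemp)
printf '%s\n' "dport 8080 comment 38036" "dport 7600 comment 38037" > "$saved"
# The live chain after the buggy double start holds every rule twice:
printf '%s\n' "dport 8080 comment 38036" "dport 8080 comment 38036" \
              "dport 7600 comment 38037" "dport 7600 comment 38037" > "$live"

# Any difference between saved state and live chain means status must fail.
if diff -q "$saved" "$live" >/dev/null; then
    status=ok
else
    status=mismatch   # -> the WARNING seen after the upgrade
fi
echo "$status"
rm -f "$saved" "$live"
```

This also explains the "Additional info": a reboot rebuilds the live chain from the saved file exactly once, so the duplicates disappear and status succeeds again.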