Bug 1028919

Summary: Unexpected message is shown when removing "cron-1.4" cartridge from a scalable application using admin_tools
Product: OpenShift Online
Reporter: Qiushui Zhang <qiuzhang>
Component: Pod
Assignee: Ravi Sankar <rpenta>
Status: CLOSED WONTFIX
QA Contact: libra bugs <libra-bugs>
Severity: medium
Priority: medium
Version: 2.x
CC: cpelland, dmcphers, lxia, xtian
Keywords: UpcomingRelease
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2015-06-11 21:44:31 UTC
Type: Bug

Description Qiushui Zhang 2013-11-11 08:39:45 UTC
Description of problem:
When removing the "cron-1.4" cartridge from a scalable application using the admin tools, an unexpected message is shown: "No request sent, we did not discover any nodes."

Version-Release number of selected component (if applicable):
devenv_4016

How reproducible:
always

Steps to Reproduce:
1. Create a scalable application.
rhc app create ews1 jbossews-1.0 cron-1.4 -s
2. Scale up the application
rhc cartridge scale jbossews-1.0 -a ews1 --min 2
3. On instance, remove cartridge "cron-1.4"

Actual results:
The cartridge "cron-1.4" is removed, but an unexpected message is shown.
[root@ip-10-100-251-123 ~]# oo-admin-ctl-app -c remove-cartridge -l qiuzhang --app ews1 --cartridge cron-1.4

No request sent, we did not discover any nodes.Success


Expected results:
The cartridge is removed without this message being printed.

Additional info:
When adding/removing a database cartridge, no such message appears:
[root@ip-10-100-251-123 ~]# oo-admin-ctl-app -c remove-cartridge -l qiuzhang --app ews1 --cartridge mysql-5.1
Success

Comment 1 Liang Xia 2013-11-13 10:16:12 UTC
Also hit this on devenv_4027 when destroying a scalable app whose gear directory does not exist.

# rm -rf /var/lib/openshift/528341ba51b0a0d7f1000141/

# oo-admin-ctl-app -c destroy -l lxia -a py33s
    !!!! WARNING !!!! WARNING !!!! WARNING !!!!
    You are about to delete the py33s application.
  
    This is NOT reversible, all remote data for this application will be removed.
Do you want to delete this application (y/n): yes

No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.Successfully deleted application: py33s

Comment 2 Liang Xia 2013-12-19 11:04:41 UTC
Reproduced on devenv_4154 with the following steps.

1. Set up a multi-node environment (1 district with at least 2 nodes).
2. Create some non-scalable apps on node1 and node2 (just create the apps, then use oo-admin-move to move the gears).
3. Create some scalable apps and scale them up (use oo-admin-move so that at least one app has gears on both node1 and node2).
4. Stop mcollective on node2.
5. Run 'oo-admin-repair --removed-nodes'.

# oo-admin-repair --removed-nodes
Started at: 2013-12-19 05:27:21 -0500
Time to fetch mongo data: 0.084s
Total gears found in mongo: 33
Servers that are unresponsive:
        Server: domU-12-31-39-13-D1-D3 (district: dist1), Confirm [yes/no]:yes
Check failed.
Some servers are unresponsive: domU-12-31-39-13-D1-D3
Do you want to delete unresponsive servers from their respective districts [yes/no]: no
Found 2 unresponsive unscalable apps:
js10 (id: 52b2ba3c655f6ec693000244)
php (id: 52b29468655f6e627f0000ed)
These apps can not be recovered. Do you want to delete all of them [yes/no]: no
Found 1 unresponsive scalable apps that can not be recovered.
as7s (id: 52b2ae2d655f6ec6930000de)
Do you want to delete all of them [yes/no]: no
Found 2 unresponsive scalable apps that can not be recovered but framework/db backup available.
py33s (id: 52b2a3c9655f6e14940001bd, backup-gears: 52b2a3c9655f6e14940001be, 52b2a3c9655f6e14940001bd)
a1 (id: 52b2be7c655f6e451a000001, backup-gears: 52b2be7c655f6e451a000001)
Do you want to skip all of them [yes/no]:(Warning: entering 'no' will delete the apps) no
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.Found 1 unresponsive scalable apps that are recoverable but some features/carts need to be removed.
zqphps (id: 52b28343655f6e996000000c features-to-remove: mongodb-2.2)
Do you want to fix all of them [yes/no]:(Warning: entering 'yes' will remove features from apps) no
Total time: 199.526s
Finished at: 2013-12-19 05:30:41 -0500

Comment 4 Liang Xia 2014-01-17 06:46:52 UTC
Reproduced on devenv_4236 with steps as in description.

# oo-admin-ctl-app -c remove-cartridge -l lxia --app phps --cartridge cron-1.4

No request sent, we did not discover any nodes.Success

Comment 5 Ravi Sankar 2014-01-21 00:28:33 UTC
Fixed in https://github.com/openshift/origin-server/pull/4529

Comment 6 openshift-github-bot 2014-01-21 02:11:46 UTC
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/2ea36c033d984a4a5c4c6e87bc9d1943b65892d2
Bug 1028919 - Do not make mcollective call for unsubscribe connection op when there is nothing to unsubscribe
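The idea behind this commit can be illustrated with a minimal sketch. Note that the class and method names below are invented for illustration and are not the actual origin-server code: the point is simply that the remote call is guarded so an empty unsubscribe list never triggers an MCollective request (and therefore never prints "No request sent, we did not discover any nodes." for a no-op).

```ruby
# Hypothetical sketch, not the actual origin-server patch: skip the
# remote MCollective call entirely when there is nothing to unsubscribe.

# Stand-in for an MCollective RPC client; records every call it receives.
class RecordingClient
  attr_reader :calls

  def initialize
    @calls = []
  end

  def call(action, args)
    @calls << [action, args]
    :ok
  end
end

class ConnectionUnsubscriber
  def initialize(rpc_client)
    @rpc_client = rpc_client
  end

  # Returns nil (no RPC issued) when the subscription list is empty.
  def unsubscribe(gear_uuid, subscriptions)
    return nil if subscriptions.nil? || subscriptions.empty?
    @rpc_client.call(:unsubscribe_connections,
                     gear: gear_uuid, subscriptions: subscriptions)
  end
end
```

With this guard in place, removing a cartridge that has no subscribers (like cron-1.4 in the description) performs no node discovery at all, so the spurious message cannot appear.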

Comment 7 Liang Xia 2014-01-21 06:12:27 UTC
Checked on devenv_4248,

The issue in comment #0 has been fixed:
# oo-admin-ctl-app -c remove-cartridge -l lxia -a phps  --cartridge cron-1.4
Success

The issue in comment #1 can still be reproduced:
# rm -rf /var/lib/openshift/528341ba51b0a0d7f1000141/
# oo-admin-ctl-app -c destroy -l lxia -a phps
    !!!! WARNING !!!! WARNING !!!! WARNING !!!!
    You are about to delete the phps application.  
    This is NOT reversible, all remote data for this application will be removed.
Do you want to delete this application (y/n): y
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.Successfully deleted application: phps

The issue in comment #2 can still be reproduced:
# oo-admin-repair --removed-nodes
Started at: 2014-01-21 06:09:24 UTC
Total gears found in mongo: 4
Servers that are unresponsive:
	Server: ip-10-138-53-103 (district: NONE), Confirm [yes/no]: yes
Some servers are unresponsive: ip-10-138-53-103
Found 1 unresponsive unscalable apps:
diy (id: 52de05abe4a4fdb5ad000005)
These apps can not be recovered. Do you want to delete all of them [yes/no]: yes
Found 1 unresponsive scalable apps that can not be recovered.
phps (id: 52de0ddde4a4fdb5ad000094)
Do you want to delete all of them [yes/no]: yes
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
No request sent, we did not discover any nodes.
Finished at: 2014-01-21 06:11:08 UTC
Total time: 104.237s
SUCCESS

Comment 8 Liang Xia 2014-01-21 11:04:31 UTC
This can also be reproduced when adding/removing capacity to a district on devenv_4248.

# oo-admin-ctl-district -c remove-capacity -n  d1 -s 1
No request sent, we did not discover any nodes.Success!
{"_id"=>"52de1df3e502d4737c000001",
 "active_server_identities_size"=>0,
 "available_capacity"=>5999,
 "available_uids"=>"<5999 uids hidden>",
 "created_at"=>2014-01-21 07:12:51 UTC,
 "gear_size"=>"small",
 "max_capacity"=>5999,
 "max_uid"=>6998,
 "name"=>"d1",
 "server_identities"=>[],
 "updated_at"=>2014-01-21 07:12:51 UTC,
 "uuid"=>"186454427227555643785216"}

# oo-admin-ctl-district -c add-capacity -n  d1 -s 1
No request sent, we did not discover any nodes.Success!
{"_id"=>"52de1df3e502d4737c000001",
 "active_server_identities_size"=>0,
 "available_capacity"=>6000,
 "available_uids"=>"<6000 uids hidden>",
 "created_at"=>2014-01-21 07:12:51 UTC,
 "gear_size"=>"small",
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "name"=>"d1",
 "server_identities"=>[],
 "updated_at"=>2014-01-21 07:12:51 UTC,
 "uuid"=>"186454427227555643785216"}

Comment 9 openshift-github-bot 2014-02-12 23:30:08 UTC
Commits pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/1f6d7ce1906506e05c05104be3ab8ba5edcfc92f
Bug 1028919 - Avoid spurious calls to mcollective rpc interface in case of parallel op execution

https://github.com/openshift/origin-server/commit/1204d158432da13cf1be7f7282818954f16c23f8
Merge pull request #4750 from pravisankar/dev/ravi/bug1028919

Merged by openshift-bot
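The first commit in comment 9 targets spurious RPC calls during parallel op execution. A hedged sketch of that idea (all names below are invented, not the actual origin-server code): group pending ops by the node that must execute them and issue one request per node that actually has work, so no RPC is ever attempted with a discovery filter matching zero nodes.

```ruby
# Hypothetical illustration of avoiding spurious RPC calls when
# executing ops in parallel; not the actual origin-server code.

# Stand-in for the MCollective RPC layer; records each dispatched request.
class RecordingRpc
  attr_reader :requests

  def initialize
    @requests = []
  end

  def execute(server, ops)
    @requests << [server, ops]
  end
end

# Dispatch ops grouped by target server, skipping entries with no
# resolvable server so that no request targets zero nodes.
def dispatch_parallel_ops(ops, rpc)
  ops.group_by { |op| op[:server] }.each do |server, server_ops|
    next if server.nil? || server_ops.empty?
    rpc.execute(server, server_ops)
  end
end
```

Under this scheme, ops whose target gear or node no longer exists (as after the `rm -rf` in comment 1) simply produce no request, rather than an RPC that discovers no nodes.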