Bug 858402 - openshift tidy command (git gc --aggressive) failed to run repack
Status: CLOSED CURRENTRELEASE
Product: OpenShift Origin
Classification: Red Hat
Component: Containers
Version: 2.x
Hardware: Unspecified
OS: Linux
Priority: low
Severity: low
Assigned To: Dan Mace
QA Contact: libra bugs
Keywords: Triaged
Reported: 2012-09-18 17:54 EDT by Adam Lane
Modified: 2015-05-14 18:59 EDT
CC: 6 users

Fixed In Version: devenv_2552+
Doc Type: Bug Fix
Last Closed: 2012-12-19 14:26:28 EST
Type: Bug

Attachments: None
Description Adam Lane 2012-09-18 17:54:09 EDT
Description of problem:
rhc app tidy fails because git gc --aggressive fails to run repack.

OUTPUT
[test-mentormyself.rhcloud.com test.git]\> time git gc --aggressive
Counting objects: 17609, done.
Delta compression using up to 2 threads.
error: failed to run repack16664/16891)
real    3m14.716s
user    0m54.077s
sys     0m1.895s

Version-Release number of selected component (if applicable):


How reproducible: every time on my OpenShift account


Steps to Reproduce:
1. Run git gc --aggressive on the app.
Actual results:
error: failed to run repack16664/16891)

Expected results:
Repack completed successfully

Additional info:
See this thread for more info:
https://openshift.redhat.com/community/forums/openshift/tidy-timeouterror#comment-23924
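The failure pattern looks like repack dying on a memory-constrained gear; that is an inference, not something stated in this bug. If that is the cause, one commonly cited mitigation is to cap git's pack memory before running gc. A minimal sketch in a throwaway repository follows; the config keys are standard git options, but the values here are illustrative, not recommendations from this bug:

```shell
# Build a throwaway repo to illustrate capping gc's memory appetite.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "user@example.com"
git config user.name "user"
echo "some data" > file.bin
git add file.bin
git commit -qm "add a binary file"

# Standard git settings; these illustrative values keep repack
# within a small memory budget instead of git's defaults.
git config pack.windowMemory 32m
git config pack.threads 1
git config gc.aggressiveWindow 10

git gc --aggressive --quiet
echo "gc completed"
```

On a real gear these settings would go in the repository's own config so that both manual gc runs and the tidy command pick them up.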
Comment 1 joycezhang 2012-09-18 23:55:23 EDT
I am afraid this bug can't be reproduced on devenv_2195.  Please refer to the detailed steps below:

Steps:
1. Create an app (a jbossas-7 app was used for testing).
2. Run rhc app tidy, or time git gc --aggressive on the app.


Actual results:
1. It works well when using rhc app tidy:
# rhc app tidy -a jbos1 -p a

MESSAGES:
Stopping app...
Running 'git prune'
Running 'git gc --aggressive'
Emptying log dir: /var/lib/stickshift/279250b7c5894dc789b7f54609d1f9f1/jbos1/logs/
Emptying tmp dir: /tmp/
Emptying tmp dir: /var/lib/stickshift/279250b7c5894dc789b7f54609d1f9f1/jbos1/tmp/
Emptying tmp dir: /var/lib/stickshift/279250b7c5894dc789b7f54609d1f9f1/jbos1/jbossas-7/standalone/tmp/
Starting app...


RESULT:
Success

2. "git gc --aggressive" is also successful:

[jbos1-joycedev1.dev.rhcloud.com jbos1.git]\> time git gc --aggressive
Counting objects: 39, done.
Compressing objects: 100% (30/30), done.
Writing objects: 100% (39/39), done.
Total 39 (delta 1), reused 0 (delta 0)

real	0m0.326s
user	0m0.023s
sys	0m0.069s


Could you please give more specific steps to reproduce it? Thanks.
Comment 2 Adam Lane 2012-09-19 16:29:37 EDT
Thank you for taking the time to look into this.  I was told by an OpenShift support person to run "rhc app tidy", and when it failed I wanted to open a bug, because if it is failing for me it will probably fail for others.

I believe you have followed the proper steps, but it must be related to the size or type of data.  I am not loading in anything strange; it's a DIY app, so basically just a bunch of binary files.

Is there a way you can root into my OpenShift account to debug?
If not, is there some file I can download from my account and upload to you?
Comment 3 joycezhang 2012-09-20 01:42:35 EDT
Thanks for your quick feedback.

Could you please use "rhc app tail -a appname" to tail the logs and paste the output here for our investigation?

Thanks again for using OpenShift.
Comment 4 Adam Lane 2012-09-24 16:12:23 EDT
Sorry for the delay (I have a newborn and spaced out).  Here are the results of the logs, which are not at all what I expected.  It looks like it is having trouble killing the DIY process?  I could be wrong, though, so I will leave that to the experts:

$ rhc app tidy -a test -d -l enalmada@gmail.com -p PASSWORD
Contacting https://openshift.redhat.com

LOG
Stopping play application
Trying to kill proccess, attempt number 1
kill -SIGTERM 1663
[info] play - Shutdown application default Akka system.
[info] application - Refreshing RSS Feeds
[info] application - Refreshing Amazon Products
[ERROR] [09/24/2012 16:05:32.710] [application-akka.actor.default-dispatcher-6] [TaskInvocation] java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown.
Stack trace of location where pool was shutdown follows:
 java.lang.Thread.getStackTrace(Thread.java:1567)
 com.jolbox.bonecp.BoneCP.captureStackTrace(BoneCP.java:543)
 com.jolbox.bonecp.BoneCP.shutdown(BoneCP.java:159)
 com.jolbox.bonecp.BoneCPDataSource.close(BoneCPDataSource.java:123)
 play.api.db.BoneCPApi.shutdownPool(DB.scala:396)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:261)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:259)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.db.BoneCPPlugin.onStop(DB.scala:259)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:75)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:74)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 scala.Option.map(Option.scala:133)
 play.api.Play$.stop(Play.scala:73)
 play.core.server.NettyServer.stop(NettyServer.scala:73)
 play.core.server.NettyServer$$anon$2.run(NettyServer.scala:139)

javax.persistence.PersistenceException: java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown.
Stack trace of location where pool was shutdown follows:
 java.lang.Thread.getStackTrace(Thread.java:1567)
 com.jolbox.bonecp.BoneCP.captureStackTrace(BoneCP.java:543)
 com.jolbox.bonecp.BoneCP.shutdown(BoneCP.java:159)
 com.jolbox.bonecp.BoneCPDataSource.close(BoneCPDataSource.java:123)
 play.api.db.BoneCPApi.shutdownPool(DB.scala:396)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:261)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:259)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.db.BoneCPPlugin.onStop(DB.scala:259)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:75)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:74)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 scala.Option.map(Option.scala:133)
 play.api.Play$.stop(Play.scala:73)
 play.core.server.NettyServer.stop(NettyServer.scala:73)
 play.core.server.NettyServer$$anon$2.run(NettyServer.scala:139)

        at com.avaje.ebeaninternal.server.transaction.TransactionManager.createQueryTransaction(TransactionManager.java:356)
        at com.avaje.ebeaninternal.server.core.DefaultServer.createQueryTransaction(DefaultServer.java:2021)
        at com.avaje.ebeaninternal.server.core.OrmQueryRequest.initTransIfRequired(OrmQueryRequest.java:241)
        at com.avaje.ebeaninternal.server.core.DefaultServer.findList(DefaultServer.java:1468)
        at com.avaje.ebeaninternal.server.querydefn.DefaultOrmQuery.findList(DefaultOrmQuery.java:906)
        at models.RssFeed.findActive(RssFeed.java:183)
        at controllers.admin.RssFeeds.refresh(RssFeeds.java:69)
        at Global$3.run(Global.java:107)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:94)
        at akka.jsr166y.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1381)
        at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
        at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
        at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479)
        at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown.
Stack trace of location where pool was shutdown follows:
 java.lang.Thread.getStackTrace(Thread.java:1567)
 com.jolbox.bonecp.BoneCP.captureStackTrace(BoneCP.java:543)
 com.jolbox.bonecp.BoneCP.shutdown(BoneCP.java:159)
 com.jolbox.bonecp.BoneCPDataSource.close(BoneCPDataSource.java:123)
 play.api.db.BoneCPApi.shutdownPool(DB.scala:396)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:261)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:259)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.db.BoneCPPlugin.onStop(DB.scala:259)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:75)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:74)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 scala.Option.map(Option.scala:133)
 play.api.Play$.stop(Play.scala:73)
 play.core.server.NettyServer.stop(NettyServer.scala:73)
 play.core.server.NettyServer$$anon$2.run(NettyServer.scala:139)

        at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:467)
        at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:114)
        at play.db.ebean.EbeanPlugin$WrappingDatasource.getConnection(EbeanPlugin.java:146)
        at com.avaje.ebeaninternal.server.transaction.TransactionManager.createQueryTransaction(TransactionManager.java:339)
        ... 13 more

[ERROR] [09/24/2012 16:05:32.712] [application-akka.actor.default-dispatcher-8] [TaskInvocation] java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown.
Stack trace of location where pool was shutdown follows:
 java.lang.Thread.getStackTrace(Thread.java:1567)
 com.jolbox.bonecp.BoneCP.captureStackTrace(BoneCP.java:543)
 com.jolbox.bonecp.BoneCP.shutdown(BoneCP.java:159)
 com.jolbox.bonecp.BoneCPDataSource.close(BoneCPDataSource.java:123)
 play.api.db.BoneCPApi.shutdownPool(DB.scala:396)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:261)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:259)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.db.BoneCPPlugin.onStop(DB.scala:259)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:75)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:74)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 scala.Option.map(Option.scala:133)
 play.api.Play$.stop(Play.scala:73)
 play.core.server.NettyServer.stop(NettyServer.scala:73)
 play.core.server.NettyServer$$anon$2.run(NettyServer.scala:139)

javax.persistence.PersistenceException: java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown.
Stack trace of location where pool was shutdown follows:
 java.lang.Thread.getStackTrace(Thread.java:1567)
 com.jolbox.bonecp.BoneCP.captureStackTrace(BoneCP.java:543)
 com.jolbox.bonecp.BoneCP.shutdown(BoneCP.java:159)
 com.jolbox.bonecp.BoneCPDataSource.close(BoneCPDataSource.java:123)
 play.api.db.BoneCPApi.shutdownPool(DB.scala:396)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:261)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:259)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.db.BoneCPPlugin.onStop(DB.scala:259)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:75)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:74)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 scala.Option.map(Option.scala:133)
 play.api.Play$.stop(Play.scala:73)
 play.core.server.NettyServer.stop(NettyServer.scala:73)
 play.core.server.NettyServer$$anon$2.run(NettyServer.scala:139)

        at com.avaje.ebeaninternal.server.transaction.TransactionManager.createQueryTransaction(TransactionManager.java:356)
        at com.avaje.ebeaninternal.server.core.DefaultServer.createQueryTransaction(DefaultServer.java:2021)
        at com.avaje.ebeaninternal.server.core.OrmQueryRequest.initTransIfRequired(OrmQueryRequest.java:241)
        at com.avaje.ebeaninternal.server.core.DefaultServer.findList(DefaultServer.java:1468)
        at com.avaje.ebeaninternal.server.querydefn.DefaultOrmQuery.findList(DefaultOrmQuery.java:906)
        at com.avaje.ebeaninternal.util.DefaultExpressionList.findList(DefaultExpressionList.java:201)
        at models.Task.findActiveProductTasks(Task.java:339)
        at controllers.admin.Products.refresh(Products.java:110)
        at Global$4.run(Global.java:120)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:94)
        at akka.jsr166y.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1381)
        at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
        at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
        at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479)
        at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: java.sql.SQLException: Attempting to obtain a connection from a pool that has already been shutdown.
Stack trace of location where pool was shutdown follows:
 java.lang.Thread.getStackTrace(Thread.java:1567)
 com.jolbox.bonecp.BoneCP.captureStackTrace(BoneCP.java:543)
 com.jolbox.bonecp.BoneCP.shutdown(BoneCP.java:159)
 com.jolbox.bonecp.BoneCPDataSource.close(BoneCPDataSource.java:123)
 play.api.db.BoneCPApi.shutdownPool(DB.scala:396)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:261)
 play.api.db.BoneCPPlugin$$anonfun$onStop$1.apply(DB.scala:259)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.db.BoneCPPlugin.onStop(DB.scala:259)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:75)
 play.api.Play$$anonfun$stop$1$$anonfun$apply$1.apply(Play.scala:74)
 scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
 scala.collection.immutable.List.foreach(List.scala:45)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 play.api.Play$$anonfun$stop$1.apply(Play.scala:74)
 scala.Option.map(Option.scala:133)
 play.api.Play$.stop(Play.scala:73)
 play.core.server.NettyServer.stop(NettyServer.scala:73)
 play.core.server.NettyServer$$anon$2.run(NettyServer.scala:139)

        at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:467)
        at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:114)
        at play.db.ebean.EbeanPlugin$WrappingDatasource.getConnection(EbeanPlugin.java:146)
        at com.avaje.ebeaninternal.server.transaction.TransactionManager.createQueryTransaction(TransactionManager.java:339)
        ... 14 more

Trying to kill proccess, attempt number 2
kill -SIGTERM 1663
Trying to kill proccess, attempt number 3
kill -SIGTERM 1663
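The repeated "Trying to kill proccess" lines above suggest an escalating stop loop: send SIGTERM a few times, then give up or escalate. As a rough illustration only (hypothetical shell, not OpenShift's actual stop script), such a loop might look like:

```shell
# Hypothetical sketch of an escalating stop loop like the one in the log.
stop_app() {
  pid=$1
  for attempt in 1 2 3; do
    kill -0 "$pid" 2>/dev/null || return 0   # process already exited
    echo "Trying to kill process, attempt number $attempt"
    kill -TERM "$pid" 2>/dev/null
    sleep 1
  done
  kill -KILL "$pid" 2>/dev/null || true      # last resort
  return 0
}

sleep 30 &            # stand-in for the app process
app_pid=$!
stop_app "$app_pid"
echo "stopped"
```

If the app ignores SIGTERM (or, as speculated in the next comment, its shutdown hooks are still running), the loop keeps retrying, which would explain the three kill attempts in the log.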
Comment 5 Adam Lane 2012-09-24 16:18:41 EDT
(Although it is possible that is all normal, and the first part of tidy is working and truncating the logs at that point, I guess.)
Comment 6 Dan Mace 2012-12-04 11:32:55 EST
Tidy was totally refactored this release, and DIY tidy operations should be working fine. Please re-test with devenv_2552+.
Comment 7 Jianwei Hou 2012-12-06 07:47:41 EST
Verified on devenv_2562

[php1-2562t.dev.rhcloud.com php1.git]\> time git gc --aggressive
Counting objects: 2112, done.
Compressing objects: 100% (1883/1883), done.
Writing objects: 100% (2112/2112), done.
Total 2112 (delta 983), reused 0 (delta 0)

real	0m42.288s
user	0m12.532s
sys	0m0.119s


Also tried rhc app tidy; tmp files, log files, and the git repo are cleaned up by rhc tidy:

Before tidy:
[cake-2562t.dev.rhcloud.com git]\> du -sh cake.git/
2.7M	cake.git/

After tidy:
[cake-2562t.dev.rhcloud.com git]\> du -sh cake.git/
2.3M	cake.git/
