Red Hat Bugzilla – Bug 849197
Both the haproxy gear group and the web gear group should have the same quota.
Last modified: 2015-05-14 22:03:15 EDT
Description of problem:
Because the web gear syncs content from the haproxy gear, setting the haproxy gear's quota to a larger value than the web gear's causes the sync to fail.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a scalable app.
2. Set the quota to 2G for the haproxy gear group via the REST API; the default quota of the web gear group is 1G.
3. Log into the haproxy gear and create ~/<app_name>/phplib/testfile with dd, sized at 1.5G, to set up a dummy test scenario.
4. Make some change in the app repo, then git push.
The git push failed at the rsync step:
remote: + rsync -v --delete-after -az /var/lib/stickshift/01ec2d9914874c9cb22d552f7429d97d/myapp//phplib/ firstname.lastname@example.org:8b959c7aa3/phplib/
remote: building file list ... done
remote: rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
remote: rsync: write failed on "/var/lib/stickshift/8b959c7aa31745f894944f4071b87fbf/8b959c7aa3/phplib/testfile": Disk quota exceeded (122)
remote: rsync error: error in file IO (code 11) at receiver.c(301) [receiver=3.0.6]
remote: rsync: connection unexpectedly closed (31 bytes received so far) [sender]
remote: rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
remote: Exit code: 1
remote: Starting application...
The haproxy gear should not be allowed a quota larger than the web gear's, or at minimum, once a quota is set on the haproxy gear group, the web gear group should receive the same quota, so that no disk overcommit can happen.
Actually, I think it would be better for the haproxy gear group and the web gear group to simply share the same quota.
Because even if setting the haproxy gear group's quota is restricted, setting the web gear group's quota to a value lower than the haproxy gear group's would also have to be restricted.
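The rule proposed above could be sketched roughly as follows. This is only an illustration of the invariant, not the actual Stickshift/OpenShift broker code; the `set_group_quotas` helper and the dict-based model of gear groups are hypothetical:

```python
class QuotaError(Exception):
    """Raised when a quota update refers to an unknown gear group."""
    pass

def set_group_quotas(groups, group_name, new_quota_gb):
    """Set a gear group's quota, keeping haproxy and web groups in sync.

    `groups` maps group name -> quota in GB. Per the suggestion above,
    rather than letting the haproxy gear group exceed the web gear group
    (which lets git push's rsync step overcommit the web gear's disk),
    a quota change to either group is applied to both.
    """
    if group_name not in groups:
        raise QuotaError("unknown gear group: %s" % group_name)
    # Apply the same quota to both groups, so rsync can never be asked to
    # copy more data than the receiving gear is allowed to store.
    for name in ("haproxy", "web"):
        if name in groups:
            groups[name] = new_quota_gb
    return groups

# Example: raising the haproxy quota to 2G also raises the web quota,
# so a 1.5G file in the haproxy gear would sync cleanly.
quotas = {"haproxy": 1, "web": 1}
set_group_quotas(quotas, "haproxy", 2)
print(quotas)  # {'haproxy': 2, 'web': 2}
```

With this rule, the reproduction above cannot occur: step 2's quota change to the haproxy gear group would implicitly raise the web gear group's quota as well.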
These are being changed to be in the same gear group upstream. Going to wait for that story (US2711 and the implementation story for it next sprint) to be complete before this is fixed.