Bug 986838 - User storage can be removed to smaller size than space already used.
Summary: User storage can be removed to smaller size than space already used.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jhon Honce
QA Contact: libra bugs
URL:
Whiteboard:
Depends On: 994174
Blocks:
 
Reported: 2013-07-22 08:26 UTC by Qiushui Zhang
Modified: 2015-05-14 23:24 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-29 12:48:32 UTC
Target Upstream Version:
Embargoed:



Description Qiushui Zhang 2013-07-22 08:26:20 UTC
Description of problem:
Additional user storage can be reduced to a smaller size than the space already used. After that, "rhc app tidy $appname" fails.

Version-Release number of selected component (if applicable):
devenv_3535

How reproducible:
Always

Steps to Reproduce:
1. Create an app. Add additional storage to it, e.g. 2GB
2. SSH to the app. Create file in app-root/data/testfile. "dd if=/dev/zero of=app-root/data/testfile bs=1MB count=3000"
3. Remove additional storage. "rhc cartridge storage -a $appname -c $cartridgename --remove 1GB"
4. Try to do "rhc app tidy $appname".

Actual results:
1. The additional storage can be removed even if the remaining storage is smaller than the space already used.
2. After the removal, operations like "rhc app tidy $appname" fail.

Expected results:
1. The user should not be allowed to remove additional storage if the remaining quota would be smaller than the space he/she already uses.
2. If the user is allowed to remove the storage in that condition, then he/she should still be able to control the app normally.

Additional info:
The error output when tidying the app looks like the following:
[openshift@localhost tmp]$ rhc app tidy jb1
Unable to complete the requested operation due to: Failed to correctly execute all parallel operations.
Reference ID: a627a4ff6e3433782c11b7ab083a1a87

Comment 1 Abhishek Gupta 2013-07-22 18:05:59 UTC
We need to add checks on the node to ensure that the storage quota being set is not resulting in violation of actual usage.
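The check proposed here could be sketched as follows. This is a minimal illustration in shell, not the actual origin-server implementation (which is in Ruby); the function name and the plain-kilobyte arguments are assumptions for the sake of the example.

```shell
# Sketch of a node-side validation: refuse a quota that is below the
# gear's current usage. In practice the usage figure would come from
# something like `quota` or `du -sk` on the gear's home directory.
validate_quota_request() {
  local used_kb=$1       # kilobytes currently used by the gear
  local requested_kb=$2  # requested new quota limit in kilobytes
  if [ "$requested_kb" -lt "$used_kb" ]; then
    echo "error: requested quota ${requested_kb}KB is below current usage of ${used_kb}KB" >&2
    return 1
  fi
}

# A gear using ~3GB cannot be lowered to a 1GB quota:
validate_quota_request 3000000 1048576 || echo "rejected"
# Lowering to 1GB is fine while only ~500MB is used:
validate_quota_request 500000 1048576 && echo "accepted"
```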

Comment 2 Jhon Honce 2013-07-22 18:08:42 UTC
Are you sure you want this check?  If a customer stops paying for additional storage, this could prevent us from lowering their quota.

Comment 3 Xiaoli Tian 2013-07-23 04:52:42 UTC
(In reply to Jhon Honce from comment #2)
> Are you sure you want this check?  If a customer stops paying for additional
> storage, this could prevent us from lowering their quota.

In that condition, it seems we should still allow reducing storage to below the actual used size, but perhaps we could raise a warning message if the user tries to reduce the storage below what is actually used.

Comment 4 Dan McPherson 2013-07-25 20:30:22 UTC
(In reply to xiaoli from comment #3)
> (In reply to Jhon Honce from comment #2)
> > Are you sure you want this check?  If a customer stops paying for additional
> > storage, this could prevent us from lowering their quota.
> 
> In that condition, it seems we should still allow reducing storage to
> below the actual used size, but perhaps we could raise a warning message
> if the user tries to reduce the storage below what is actually used.

Agreed.  A user should not be able to lower their quota below what's used.

Comment 5 openshift-github-bot 2013-07-31 19:30:53 UTC
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/27dc36d2cca2a42e7439ebdcafd80df581818c46
Bug 986838 - Prevent quotas from being lowered beyond usage

* Add validation and tests

Comment 6 Qiushui Zhang 2013-08-01 03:08:35 UTC
Verified on devenv_3597,
Now, the user cannot reduce the storage below the space already used. But the error message is not very helpful. It is currently something like the following:

Set storage on cartridge ... 
Unable to complete the requested operation due to: Failed to correctly execute all parallel operations.
Reference ID: 7b1cfe2aa20d16eaeecebf71fb9f7a19

It would be better to give a more user-friendly message.

Marking it as ASSIGNED for a friendlier error message.

Comment 7 openshift-github-bot 2013-08-02 01:11:38 UTC
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/9b70ce4b4687b7f9f651921c1add445faa41ec63
Bug 986838 - Prevent quotas from being lowered beyond usage

* Format error message for Broker to pick up

Comment 8 Qiushui Zhang 2013-08-02 11:15:13 UTC
Checked on devenv_3606; the error message is unchanged.
Set storage on cartridge ... 
Unable to complete the requested operation due to: Failed to correctly execute all parallel operations.
Reference ID: ad58ee1c7ef4740b46ac90cfdc9403cd

Marking it as ASSIGNED.

Comment 10 Qiushui Zhang 2013-08-09 01:05:59 UTC
I see the same result as you, following your steps. But if I use "rhc cartridge storage perl-5.10 -a perl001 --add 1GB" to add the storage, I still see the following message:
Set storage on cartridge ... 
Unable to complete the requested operation due to: Failed to correctly execute all parallel operations.
Reference ID: df49d9d52335e9c5edac0d32510b3bee

Comment 11 Qiushui Zhang 2013-08-09 02:47:06 UTC
An example of my reproduction procedure:

rhc app create php1 php-5.3
rhc cartridge php-5.3 -a php1 --add 2GB
rhc ssh php1

#In app gear shell
dd if=/dev/zero of=~/app-root/data/testfile bs=1M count=3000
exit
#Back to my desktop

rhc cartridge php-5.3 -a php1 --remove 2GB

At this point, I still get the meaningless error message shown in the comment above.

Comment 12 Qiushui Zhang 2013-08-09 02:50:42 UTC
Sorry for the typo - "storage" is missing in the last comments. It should be "rhc cartridge storage php-5.3 -a php1 --add 2GB" and "rhc cartridge storage php-5.3 -a php1 --remove 2GB".

Comment 13 Jhon Honce 2013-08-13 00:42:47 UTC
The expected message cannot be reported until bug https://bugzilla.redhat.com/show_bug.cgi?id=994174 is closed.

The expected message is logged in the node's /var/log/mcollective.log.

Comment 14 Qiushui Zhang 2013-08-13 05:44:57 UTC
According to the developer's comments, marking this issue as VERIFIED to close the defect. We will retest this problem when bug 994174 is fixed.

