Bug 1479299

Summary: [RFE] deactivate/activate project for a certain period
Product: OpenShift Container Platform
Reporter: Kenjiro Nakayama <knakayam>
Component: RFE
Assignee: Paul Weil <pweil>
Status: CLOSED WONTFIX
QA Contact: Xiaoli Tian <xtian>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.5.1
CC: aos-bugs, erich, jokerman, knakayam, mmccomas, pweil
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-02-08 14:06:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Kenjiro Nakayama 2017-08-08 10:26:13 UTC
1. Proposed title of this feature request

[RFE] deactivate/activate project for a certain period

3. What is the nature and description of the request?
4. Why does the customer need this? (List the business requirements here)

Let's say one project has a lot of pods for test/development. As an admin user, we want to deploy other projects, but we do not want to clean up all of the existing resources.

It might be possible to do this by scaling all pods in the project down to zero, but then we also need to update the deploymentConfig and the minimum pod count of every autoscaler. It bothers us a lot.

5. How would the customer like to achieve this? (List the functional requirements here)

One example the customer suggested is to use a label on the project.
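
Purely as an illustration (the label key below is made up, and nothing in OpenShift acts on it today), the idea is something along the lines of:

    # Hypothetical: a controller would notice this label, scale everything in the
    # project down to zero, and restore it once the label is removed again.
    oc label namespace test-project example.com/suspended=true --overwrite
    # ...and later, to reactivate:
    oc label namespace test-project example.com/suspended-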

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?

I don't think so.

8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?

Not exactly. The priority is medium at this moment.

Comment 2 Paul Weil 2017-08-08 11:59:33 UTC
Kenjiro, can you clarify the request here a bit?  Why would a customer need to deactivate a project in order to deploy other projects?  Why does one project's resources have an effect on the decision to deploy another project's resources?

This is something that would have to be worked into every resource object (including anything aggregated as an API) as well as every controller in the upstream and downstream systems. It is not something that we would pursue, since isolation is already available. I'd like to understand the reasoning behind the request, though.

Thanks.

Comment 3 Kenjiro Nakayama 2017-08-08 12:25:20 UTC
Paul, for example, if there are many pods in project A, the user has to scale down all of the pods by hand. The customer finds this especially bothersome when autoscaling is configured, because the autoscalers have to be adjusted as well.
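
For example, for every scalable workload in project A, something like this has to be done by hand today (the names here are only illustrative):

    # The HPA has to be removed (or its minimum lowered) first, otherwise it
    # would scale the pods straight back up.
    oc delete hpa frontend -n project-a
    oc scale dc/frontend --replicas=0 -n project-a
    # ...and the reverse again when the project should become active.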

Also, although they can use another project without deactivating project A, the existing project A keeps consuming allocatable resources on the nodes, doesn't it? - https://docs.openshift.org/latest/admin_guide/allocating_node_resources.html
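
(This shows up in the "Allocated resources" section of the node description; the node name below is only an example.)

    # Requests of project A's idle pods are still counted against the node's allocatable capacity
    oc describe node node1.example.com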

Comment 4 Paul Weil 2017-08-08 12:48:58 UTC
So, if I'm understanding correctly, the issue is that they do not want the namespace to consume resources, but they do not want to delete the namespace either.

Ok. What I would recommend here is that they create a script to scale down the scalable resources in the namespace with oc commands. This may be something that we could pursue as a CLI utility type of command; however, deactivation of a namespace wouldn't be the route we would go because of how invasive it would be.
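
For example, a minimal sketch of such a script, assuming deploymentConfigs and HPAs are the only scalable resources involved and using a made-up annotation key to remember the previous replica counts, might look like this:

    #!/bin/bash
    # Sketch only. Usage: ./suspend-project.sh <project>
    set -e
    PROJECT="$1"

    # Save the HPAs so they can be recreated later, then remove them so they
    # do not immediately scale the pods back up.
    oc get hpa -n "$PROJECT" -o yaml > "hpa-backup-${PROJECT}.yaml"
    oc delete hpa --all -n "$PROJECT"

    # Record each dc's current replica count in a (made-up) annotation,
    # then scale it down to zero.
    for dc in $(oc get dc -n "$PROJECT" -o name); do
      replicas=$(oc get "$dc" -n "$PROJECT" -o jsonpath='{.spec.replicas}')
      oc annotate "$dc" -n "$PROJECT" example.com/previous-replicas="$replicas" --overwrite
      oc scale "$dc" -n "$PROJECT" --replicas=0
    done

Reactivating would be the reverse: read the annotation back, scale each dc to its previous value, and recreate the HPAs from the saved file.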

Comment 6 Kenjiro Nakayama 2017-08-11 02:11:25 UTC
I have opened bz#1480104 for the deactivation of HPA. If bz#1480104 is achievable, we are happy to close this ticket.

Comment 7 Paul Weil 2017-08-11 12:58:43 UTC
Great, thanks Kenjiro.  Closing this one.

Comment 8 Kenjiro Nakayama 2018-01-09 05:22:24 UTC
Re-opening this bz, as the suggested bz#1480104 has not been worked on at all. Paul, could you please work on either this ticket or bz#1480104?

Comment 9 Paul Weil 2018-02-08 14:06:28 UTC
https://bugzilla.redhat.com/show_bug.cgi?id=1480104 is assigned and is part of the planning process. That, unfortunately, does not guarantee scheduling. However, I see that the pulls you linked in the other bug are merged. If you have not already done so, I would elaborate on that BZ as to why the merged pulls do not satisfy your requirements.