Bug 1479299 - [RFE] deactivate/activate project for a certain period
Product: OpenShift Container Platform
Classification: Red Hat
Component: RFE
Severity: medium
Assigned To: Paul Weil
QA Contact: Xiaoli Tian
Status: Reopened
Reported: 2017-08-08 06:26 EDT by Kenjiro Nakayama
Modified: 2018-02-08 09:06 EST (History)

Last Closed: 2018-02-08 09:06:28 EST
Type: Bug

Attachments: None
Description Kenjiro Nakayama 2017-08-08 06:26:13 EDT
1. Proposed title of this feature request

[RFE] deactivate/activate project for a certain period

3. What is the nature and description of the request?
4. Why does the customer need this? (List the business requirements here)

Let's say one project has a lot of pods for test/development. As an admin user, we want to deploy other projects, but we do not want to clean up all of the resources of the existing one.

It might be possible to achieve this by scaling all pods in the project down to zero, but then we would also need to update each deploymentConfig and the minimum pod count of every autoscaled pod. That is a significant burden.

5. How would the customer like to achieve this? (List the functional requirements here)

One approach the customer suggested is to mark the project with a label.

7. Is there already an existing RFE upstream or in Red Hat Bugzilla?

I don't think so.

8. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?

Not exactly. The priority is medium at this moment.
Comment 2 Paul Weil 2017-08-08 07:59:33 EDT
Kenjiro, can you clarify the request here a bit?  Why would a customer need to deactivate a project in order to deploy other projects?  Why does one project's resources have an effect on the decision to deploy another project's resources?

This is something that would have to be worked into every resource object (including anything aggregated as an api) as well as every controller in the upstream and downstream systems.  It is not something that we would pursue since isolation is already available.  I'd like to understand the reasoning behind the request though.

Comment 3 Kenjiro Nakayama 2017-08-08 08:25:20 EDT
Paul, for example, if there are many pods in project A, the user has to scale down all of the pods. The customer finds this especially bothersome when autoscaled pods need to be reconfigured.

Also, although they can use another project without deactivating project A, the existing project A keeps consuming allocatable resources on the node, doesn't it? - https://docs.openshift.org/latest/admin_guide/allocating_node_resources.html
Comment 4 Paul Weil 2017-08-08 08:48:58 EDT
So the issue is that they do not want to consume resources for the namespace but do not want to delete the namespace if I'm understanding correctly.  

Ok.  What I would recommend here is that they create a script to scale down the scalable resources in the namespace with oc commands.  This may be something that we could pursue as a CLI utility type of command; however, deactivation of a namespace wouldn't be the route we would go because of how invasive it would be.
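As a rough illustration of the suggested workaround, the bash sketch below scales every scalable workload in a given namespace down to zero using plain oc commands. The function name and the chosen resource kinds are assumptions for illustration, not an existing utility; note that HorizontalPodAutoscalers would scale workloads back up unless they are removed or adjusted first, which is the pain point this RFE is about.

```shell
#!/usr/bin/env bash
# Sketch only: "pause" a project by scaling its workloads to zero replicas.
# Assumes the oc CLI is installed and the user is logged in with rights
# on the target namespace. Not an official oc subcommand.
set -euo pipefail

scale_namespace_to_zero() {
  local ns="$1"
  # DeploymentConfigs, Deployments, and StatefulSets are the usual scalable
  # kinds; any HPA targeting them must be removed first or it will undo this.
  for kind in dc deploy statefulset; do
    for name in $(oc get "$kind" -n "$ns" -o name); do
      oc scale "$name" -n "$ns" --replicas=0
    done
  done
}
```

Reversing this would require recording the original replica counts before scaling down, which is exactly the kind of bookkeeping that makes a built-in command (or namespace-level deactivation) attractive.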
Comment 6 Kenjiro Nakayama 2017-08-10 22:11:25 EDT
I have opened bz#1480104 for the deactivation of HPA. If bz#1480104 is achievable, we are happy to close this ticket.
Comment 7 Paul Weil 2017-08-11 08:58:43 EDT
Great, thanks Kenjiro.  Closing this one.
Comment 8 Kenjiro Nakayama 2018-01-09 00:22:24 EST
Re-opening this bz, as the suggested bz#1480104 has not been worked on at all. Paul, could you please work on either this ticket or bz#1480104?
Comment 9 Paul Weil 2018-02-08 09:06:28 EST
https://bugzilla.redhat.com/show_bug.cgi?id=1480104 is assigned and is part of the planning process.  That, unfortunately, does not guarantee scheduling.  However, I see that the pulls you linked in the other bug are merged.  If you have not already done so I would elaborate on the BZ why the pull mentioned does not satisfy your requirements.
