Bug 1211366

Summary: Error creating a user when jumping from an original server to replica
Product: Red Hat Enterprise Linux 8
Version: 8.0
Component: ipa
Assignee: IPA Maintainers <ipa-maint>
QA Contact: Namita Soman <nsoman>
Reporter: Tomas Capek <tcapek>
CC: abokovoy, mkosek, nkinder, pasik, pcech, pvoborni, rcritten, tscherf
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2020-10-22 11:36:10 UTC
Bug Depends On: 1029640

Description Tomas Capek 2015-04-13 18:48:51 UTC
In a training session today, a curious error occurred. We were working at

https://rhevm.idm.lab.eng.brq.redhat.com/ovirt-engine/userportal

using the RHEL-7.1-x86_64-developer-brq VM template

We installed a server, then created a replica of it, then shut down the original server and accessed the IdM web UI through the replica.

When we tried to add a random new user, the following error popped up: 

Operations error: Allocation of a new value for range cn=posix ids,cn=distributed numeric assignment plugin,cn=plugins,cn=config failed! Unable to proceed. 


Even though the attempt may have been made too soon after the replica was created, Petr suggested this warrants further investigation.

Comment 1 Rob Crittenden 2015-04-13 18:56:24 UTC
A replica doesn't get a range of IDs to assign until the first time it needs one. It gets this range from the master it was created from. Since you had already shut that master down, there was nowhere to get this range of IDs, hence the error.

You can use the ipa-replica-manage command to set a range, though it is important that it not overlap with any other master's range or with IDs already issued (in your case this barely applies).
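
For illustration, a rough sketch of what that could look like with the DNA range subcommands; the hostname and range values below are hypothetical:

  # Show the DNA range currently assigned to a server (empty until one is allocated)
  $ ipa-replica-manage dnarange-show replica.example.com

  # Manually assign a non-overlapping range to the replica
  $ ipa-replica-manage dnarange-set replica.example.com 100000-199999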

Comment 3 Petr Spacek 2015-04-14 06:53:17 UTC
My understanding is that Tomas's complaint is mainly about usability: this behavior may be logical from the implementation's point of view, but it definitely surprises an unsuspecting admin or user.

We have discussed this with Tomas and agreed that this warrants at least a command to assign ranges to the new replica (plus, I would say, a necessary doc note).

The reasoning behind that is quite straightforward:
An admin expects the replica to function just fine when the older server is down. That is the reason for creating a replica in the first place, so anything that breaks this assumption is bad.

Comment 4 Tomas Capek 2015-04-14 16:05:56 UTC
(In reply to Rob Crittenden from comment #1)
> A replica doesn't get a range of IDs to assign until the first time it
> needs one. It gets this range from the master it was created from. Since
> you had already shut that master down, there was nowhere to get this
> range of IDs, hence the error.
> 
> You can use the ipa-replica-manage command to set a range, though it is
> important that it not overlap with any other master's range or with IDs
> already issued (in your case this barely applies).

One immediate question occurred to me: could the replica simply get its ID range as part of the creation process? Could this be the default behavior, to prevent this bug (and possibly other, related bugs) from occurring?

From the user's perspective, it is always better to handle things like this under the hood when possible, rather than document a limitation that is counter-intuitive, especially for newcomers.

However, just a documentation update is certainly an option here.

Comment 5 Rob Crittenden 2015-04-14 20:22:42 UTC
The "best" way to get a range is to let DNA do it (to have locking, etc.)

When a range is assigned, half of the current server's range is given away. Some installs want to centralize all object creation on a few servers, so assigning a range to every replica can deplete the range significantly.
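
To make the mechanics concrete: each server's current range lives in the DNA plugin configuration entry named in the error above. A minimal sketch of inspecting it, assuming Directory Manager credentials:

  $ ldapsearch -D "cn=Directory Manager" -W \
      -b "cn=posix ids,cn=distributed numeric assignment plugin,cn=plugins,cn=config" \
      dnaNextValue dnaMaxValue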

The question isn't so much "can we?" as it is "should we?"

Comment 6 Petr Spacek 2015-04-16 12:22:24 UTC
Rob, are you saying that a server is not able to deal with the situation where its own range is depleted?

It could be the underlying problem.

Comment 7 Rob Crittenden 2015-04-16 13:04:03 UTC
AFAIU, for the initial master, yes, since it has no other server to ask for additional values. A replica will ask its creator for more values if it runs out, assuming that server is accessible.

The underlying problem is that ranges aren't assigned until they are needed, and that is somewhat deliberate, for the reasons stated above.

ipa-replica-manage has DNA range commands which can help resolve issues that come up, and when a replica is deleted an attempt is made to recover any range assigned to it.
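
For reference, a sketch of the on-deck range subcommands alongside the main ones; server names and values here are hypothetical:

  # Inspect the spare (next) range a server holds in reserve
  $ ipa-replica-manage dnanextrange-show master.example.com

  # Park a recovered or spare range on a server for later use
  $ ipa-replica-manage dnanextrange-set master.example.com 200000-299999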

Comment 8 Petr Spacek 2015-04-16 13:15:49 UTC
Tomas and I think that the current behavior is not user-friendly, so I'm trying to figure out how it could be improved.

What prevents the original master from requesting part of the range back from other replicas (which received fractions of the range from the original master)?

Comment 9 Martin Kosek 2015-04-21 11:50:28 UTC
This will help, yes. We are tracking the request in Bug 1029640. There is still a chance it will be fixed in RHEL 7.2.

I would suggest closing this Bugzilla as a duplicate of the bug above.

Comment 10 Petr Spacek 2015-04-21 12:15:39 UTC
I would say that bug 1029640 is a prerequisite for fixing this bug, not a duplicate.

We could freely distribute ranges during ipa-replica-install once bug 1029640 is closed, because there will be a much smaller chance that something will go wrong.

(My understanding is that the original reason why ranges were not distributed during replica installation - i.e. range depletion - will not be valid anymore, is that right?)

Comment 11 Petr Spacek 2015-05-27 08:01:00 UTC
This is happening in the field, too:
https://www.redhat.com/archives/freeipa-users/2015-May/msg00515.html

Comment 12 Petr Vobornik 2015-06-17 15:59:51 UTC
Upstream ticket:
https://fedorahosted.org/freeipa/ticket/5070

Comment 14 Petr Vobornik 2017-02-23 16:18:38 UTC
The bug doesn't have high enough priority in comparison to other IdM bugs/RFEs for 7.4. Moving to the next release. Without sufficient justification, it may be moved again later.

Comment 18 Alexander Bokovoy 2019-07-30 11:24:00 UTC
With ipa-healthcheck we certainly can have a rule that warns that this replica has no ranges yet. We could also have a Web UI popup/warning saying the same and noting that shutting down the master of this replica would cause problems, as no sub-range is allocated yet.

This means two tickets could be spun out of this bug and fixed.
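
A check along these lines exists in ipa-healthcheck today; a sketch of invoking it, assuming the check is exposed under the ipahealthcheck.ipa.dna source (treat the source name as an assumption):

  # Run only the DNA range check to see whether this replica has a range yet
  $ ipa-healthcheck --source ipahealthcheck.ipa.dna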

Comment 19 Alexander Bokovoy 2019-07-30 11:32:04 UTC
Another note -- for POSIX IDs we now have the range in a replicated subtree. This means that all replicas have information about ranges and can decide whether a range is allocated to them.
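
A sketch of what that enables: the per-server DNA bookkeeping is kept in a replicated shared-config subtree, so any replica can inspect everyone's ranges locally. The DN below assumes the standard FreeIPA location and an example suffix:

  $ ldapsearch -D "cn=Directory Manager" -W \
      -b "cn=posix-ids,cn=dna,cn=ipa,cn=etc,dc=example,dc=com" \
      dnaHostname dnaRemainingValues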

Comment 22 Petr Čech 2020-10-22 11:36:10 UTC
This BZ has been evaluated multiple times over the last several years, and we assessed that it is a valuable request to keep in the backlog and address at some point in the future. Time has shown that we did not have the capacity for it, do not have it now, and will not have it in the foreseeable future. In such a situation, keeping it in the backlog is misleading and sets the wrong expectation that we will be able to address it. Unfortunately, we will not. To reflect this, we are closing this BZ.

If you disagree with the decision, please reopen it or open a new support case and create a new BZ. However, this does not guarantee that the request will not be closed during triage, as we are currently applying much more rigor to what we actually can accomplish in the foreseeable future. Contributions and collaboration in the upstream community and CentOS Stream are always welcome!
Thank you for understanding.
Red Hat Enterprise Linux Identity Management Team