Bug 1211366 - Error creating a user when jumping from an original server to replica
Summary: Error creating a user when jumping from an original server to replica
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: ipa
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: IPA Maintainers
QA Contact: Namita Soman
URL:
Whiteboard:
Depends On: 1029640
Blocks:
 
Reported: 2015-04-13 18:48 UTC by Tomas Capek
Modified: 2020-11-03 20:32 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-22 11:36:10 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Tomas Capek 2015-04-13 18:48:51 UTC
In a training session today, a curious error occurred. We were working at

https://rhevm.idm.lab.eng.brq.redhat.com/ovirt-engine/userportal

using the RHEL-7.1-x86_64-developer-brq VM template

We installed a server, then created a replica of it, then shut down the original server and accessed the IdM web UI through the replica.

When we tried to add a random new user, the following error popped up: 

Operations error: Allocation of a new value for range cn=posix ids,cn=distributed numeric assignment plugin,cn=plugins,cn=config failed! Unable to proceed. 


Even though the attempt may have been made too soon after the replica was created, Petr suggested this warrants further investigation.

Comment 1 Rob Crittenden 2015-04-13 18:56:24 UTC
A replica doesn't get a range of IDs to assign until the first time it needs one. It gets this range from the master it was created from. Since you had already shut that master down, there was nowhere to get this range of IDs, hence the error.

You can use the ipa-replica-manage command to set a range, though it is important that it not overlap with any other master's range or with IDs already issued (in your case this barely applies).
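Editorial note: Rob's suggestion maps to the DNA range subcommands of ipa-replica-manage. A minimal sketch, assuming a live IPA deployment (the hostname and range values below are illustrative, not from this bug; pick a range that does not overlap any other master's range or already-issued IDs):

```
# Show the DNA ranges currently assigned to each server
ipa-replica-manage dnarange-show

# Assign a non-overlapping range to the replica
# (hostname and values are illustrative)
ipa-replica-manage dnarange-set replica1.example.com 1250-1499
```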

Comment 3 Petr Spacek 2015-04-14 06:53:17 UTC
My understanding is that Tomas's complaint is mainly about usability: this behavior may be logical from an implementation point of view, but it definitely surprises an unsuspecting admin or user.

We have discussed this with Tomas and agreed that this warrants at least a command to get the ranges to the new replica (plus, I would say, a necessary doc note).

The reasoning behind that is quite straightforward:
The admin expects the replica to function just fine when the older server is down. That is the reason he is creating the replica in the first place, so anything which breaks this assumption is bad.

Comment 4 Tomas Capek 2015-04-14 16:05:56 UTC
(In reply to Rob Crittenden from comment #1)
> A replica doesn't get a range of ID's to assign until the first time it
> needs one. It gets this range from the master it was created from. Since you
> had already shut that master down there was nowhere to get this range of
> IDs, hence the error.
> 
> You can  use the ipa-replica-manage command to set a range, though it is
> important that it not overlap with any other masters or IDs already issued
> (in your case this barely applies).

One immediate question occurred to me: Could the replica easily get its ID range as part of the creation process? Could it be the default behavior to prevent this bug (and possibly other, related bugs) from occurring?

From the user's perspective, it is always better to work out things like this under the hood if possible rather than documenting a limitation that is counter-intuitive, especially for newcomers.

However, just a documentation update is certainly an option here.

Comment 5 Rob Crittenden 2015-04-14 20:22:42 UTC
The "best" way to get a range is to let DNA do it (to have locking, etc.)

When a range is assigned, half of the current server's range is given away. Some installations want to centralize all object creation on a few servers, so assigning a range to every replica can deplete the pool significantly.

The question isn't so much "can we?" as it is "should we?"
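Editorial note: the depletion concern Rob describes can be illustrated with a toy sketch of the halving scheme. This is not the 389-ds DNA plugin itself (which also handles locking and replication), just the arithmetic of giving each new replica the upper half of the master's remaining range:

```python
def split_range(lo, hi):
    """Split [lo, hi] in half: the master keeps the lower half,
    the requesting replica receives the upper half."""
    mid = (lo + hi) // 2
    return (lo, mid), (mid + 1, hi)

# Start from a master owning IDs 1000-1999 and let four replicas each
# request a range in turn; the master's pool shrinks geometrically.
master = (1000, 1999)
replicas = []
for _ in range(4):
    master, replica = split_range(*master)
    replicas.append(replica)

print(master)    # master keeps only (1000, 1062) after four splits
print(replicas)
```

After just four replicas, the master retains roughly 1/16 of its original pool, which is why eagerly handing out ranges at install time was avoided.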

Comment 6 Petr Spacek 2015-04-16 12:22:24 UTC
Rob, are you saying that the server is not able to deal with a situation where its own range is depleted?

That could be the underlying problem.

Comment 7 Rob Crittenden 2015-04-16 13:04:03 UTC
AFAIU the initial master, yes, since it has no other server to ask for additional values. A replica will ask its creator for more values if it runs out, assuming that server is accessible.

The underlying problem is that ranges aren't assigned until they are needed, and that is somewhat for a reason, as stated above.

ipa-replica-manage has DNA range commands which can help resolve issues that come up, and when a replica is deleted an attempt is made to recover any range it has assigned.

Comment 8 Petr Spacek 2015-04-16 13:15:49 UTC
Tomas and I think that the current behavior is not user-friendly, so I'm trying to figure out how it could be improved.

What prevents the original master from requesting part of the range back from the other replicas (which received fractions of the range from the original master)?

Comment 9 Martin Kosek 2015-04-21 11:50:28 UTC
This will help, yes. We are tracking the request in Bug 1029640. There is still a chance it will be fixed in RHEL-7.2.

I would suggest closing this Bugzilla as a duplicate of the bug above.

Comment 10 Petr Spacek 2015-04-21 12:15:39 UTC
I would say that bug 1029640 is a prerequisite for fixing this bug, not a duplicate.

We could freely distribute ranges during ipa-replica-install once bug 1029640 is closed, because there will be a much smaller chance that something will go wrong.

(My understanding is that the original reason why ranges were not distributed during replica installation, i.e. range depletion, will no longer be valid. Is that right?)

Comment 11 Petr Spacek 2015-05-27 08:01:00 UTC
This is happening in the field, too:
https://www.redhat.com/archives/freeipa-users/2015-May/msg00515.html

Comment 12 Petr Vobornik 2015-06-17 15:59:51 UTC
Upstream ticket:
https://fedorahosted.org/freeipa/ticket/5070

Comment 14 Petr Vobornik 2017-02-23 16:18:38 UTC
The bug doesn't have high enough priority in comparison to other IdM bugs/RFEs for 7.4. Moving to next release. Without sufficient justification it can be moved again later.

Comment 18 Alexander Bokovoy 2019-07-30 11:24:00 UTC
With ipa-healthcheck we certainly can have a rule that warns that this replica has no ranges yet. We could also have a Web UI popup/warning saying the same, and warning that shutting down the master of this replica would cause problems because no sub-range is allocated to the replica yet.

This means there are two tickets that could be split out of this bug and fixed.
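Editorial note: later ipa-healthcheck releases did grow a DNA range check along these lines. Assuming the source and check names from the ipa-healthcheck project (verify against your installed version), the check can be run on a replica as:

```
# Report each server's DNA range; an empty range_start/range_max in the
# output means no range has been allocated to this replica yet
ipa-healthcheck --source ipahealthcheck.ipa.dna --check IPADNARangeCheck
```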

Comment 19 Alexander Bokovoy 2019-07-30 11:32:04 UTC
Another note -- for POSIX IDs we now have the range in a replicated subtree. This means that all replicas have information about ranges and can decide whether a range is allocated to them.

Comment 22 Petr Čech 2020-10-22 11:36:10 UTC
This BZ has been evaluated multiple times over the last several years, and we assessed that it is a valuable request to keep in the backlog and address at some point in the future. Time has shown that we did not have that capacity, do not have it now, and will not have it in the foreseeable future. In such a situation, keeping it in the backlog is misleading and sets the wrong expectation that we will be able to address it. Unfortunately, we will not. To reflect this, we are closing this BZ.

If you disagree with the decision, please reopen it, or open a new support case and create a new BZ. However, this does not guarantee that the request will not be closed during triage, as we are currently applying much more rigor to what we can actually accomplish in the foreseeable future. Contributions and collaboration in the upstream community and CentOS Stream are always welcome!

Thank you for understanding.
Red Hat Enterprise Linux Identity Management Team

