Bug 1874391 - Avoid irrelevant error messages when crm_node is run inside a bundle
Summary: Avoid irrelevant error messages when crm_node is run inside a bundle
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: unspecified
Target Milestone: rc
Target Release: 8.4
Assignee: Reid Wahl
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1885645
Blocks:
 
Reported: 2020-09-01 09:12 UTC by Michele Baldessari
Modified: 2021-05-18 15:46 UTC
CC List: 5 users

Fixed In Version: pacemaker-2.0.5-1.el8
Doc Type: No Doc Update
Doc Text:
Bundles are currently only supported in the context of OpenStack, so we don't need to mention this generally.
Clone Of:
Environment:
Last Closed: 2021-05-18 15:26:40 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
- GitHub ClusterLabs/pacemaker pull 2157: "Log: libcrmcommon: Warn instead of err in scan_ll()" (closed); last updated 2020-12-06 18:29:49 UTC
- Red Hat Knowledge Base Solution 5368221; last updated 2020-09-03 00:18:48 UTC
- Red Hat Product Errata RHEA-2021:1782; last updated 2021-05-18 15:46:50 UTC

Description Michele Baldessari 2020-09-01 09:12:21 UTC
Description of problem:
Hi Ken,

I think we're seeing some small fallout from https://bugzilla.redhat.com/show_bug.cgi?id=1796824. I installed pacemaker-2.0.4-6 on both the host and the containers. crm_node -l and -p work fine:
[root@controller-0 ~]# rpm -q pacemaker
pacemaker-2.0.4-6.el8.x86_64
[root@controller-0 ~]# podman exec -it galera-bundle-podman-0 bash -c 'rpm -q pacemaker'
pacemaker-2.0.4-6.el8.x86_64
[root@controller-0 ~]# podman exec -it galera-bundle-podman-0 bash -c 'crm_node -l'
1 controller-0 member
2 controller-1 member
[root@controller-0 ~]# podman exec -it galera-bundle-podman-0 bash -c 'crm_node -p'                                                                                                                                                    
controller-0 controller-1 

crm_node -q is a bit more odd:
[root@controller-0 ~]# podman exec -it galera-bundle-podman-0 bash -c 'crm_node -q'                                                                                                                                                    
(scan_ll)       error: Could not parse integer from galera-bundle-0 (using -1 instead): No digits found
1

Comment 2 Reid Wahl 2020-09-01 10:03:56 UTC
Can you run it with tracing enabled? `podman exec -it galera-bundle-podman-0 bash -c 'crm_node -VVVVVV -q'`

Comment 3 Michele Baldessari 2020-09-01 10:06:47 UTC
[root@controller-0 ~]# podman exec -it galera-bundle-podman-0 sh -c 'crm_node -VVVVVV -q'
(set_crm_log_level) 	trace: New log level: 8
(set_crm_log_level) 	trace: New log level: 8
(pcmk_new_ipc_api) 	trace: Created controller API IPC object
(crm_ipc_connected) 	trace: No connection
(qb_rb_open_2) 	debug: shm size:131085; real_size:135168; rb->word_size:33792
(qb_rb_open_2) 	debug: shm size:131085; real_size:135168; rb->word_size:33792
(qb_rb_open_2) 	debug: shm size:131085; real_size:135168; rb->word_size:33792
(crm_user_lookup) 	trace: User hacluster has uid=189 gid=189
(mainloop_add_fd) 	trace: Added connection 1 for crmd[0x5597fa720f00].3
(connect_with_main_loop) 	debug: Connected to controller IPC (attached to main loop)
(create_hello_message) 	trace: Created hello message from crm_node (UUID 53976)
(pcmk__send_ipc_request) 	trace: ipc-sent   <create_request_adv origin="create_hello_message" t="crmd" version="3.4.1" subt="request" reference="hello-crm_node-1598954761-0" crm_task="hello" crm_sys_from="53976_crm_node">
(pcmk__send_ipc_request) 	trace: ipc-sent     <crm_xml>
(pcmk__send_ipc_request) 	trace: ipc-sent       <options major_version="1" minor_version="0" client_name="crm_node" client_uuid="53976"/>
(pcmk__send_ipc_request) 	trace: ipc-sent     </crm_xml>
(pcmk__send_ipc_request) 	trace: ipc-sent   </create_request_adv>
(crm_ipc_send) 	trace: Sending crmd IPC request 1 of 347 bytes using 5000ms timeout
(crm_ipc_send) 	trace: Not waiting for reply to crmd IPC request 1
(post_connect) 	debug: Sent IPC hello to controller
(pcmk__send_ipc_request) 	trace: ipc-sent   <create_request_adv origin="create_controller_request" t="crmd" version="3.4.1" subt="request" reference="node-info-crm_node-1598954761-1" crm_task="node-info" crm_sys_to="crmd" crm_sys_from="53976_crm_node"/>
(crm_ipc_send) 	trace: Sending crmd IPC request 2 of 250 bytes using 5000ms timeout
(crm_ipc_send) 	trace: Not waiting for reply to crmd IPC request 2
(crm_ipc_read) 	trace: Received crmd event 11, size=451, rc=451, text: <create_reply_adv origin="handle_node_info_request" t="crmd" version="3.4.1" subt="response" referen
(mainloop_gio_callback) 	trace: New message from crmd[0x5597fa720f00] = 411 (I/O condition=1)
(call_api_dispatch) 	trace: ipc-received   <create_reply_adv origin="handle_node_info_request" t="crmd" version="3.4.1" subt="response" reference="node-info-crm_node-1598954761-1" crm_task="node-info" crm_sys_to="fea97bff-9f1f-4b63-a98e-ae9fbac1bec8" crm_sys_from="crmd" src="controller-0">
(call_api_dispatch) 	trace: ipc-received     <crm_xml>
(call_api_dispatch) 	trace: ipc-received       <node crm_subsystem="crmd" have-quorum="true" id="galera-bundle-0" uname="galera-bundle-0" crmd="member" remote_node="true"/>
(call_api_dispatch) 	trace: ipc-received     </crm_xml>
(call_api_dispatch) 	trace: ipc-received   </create_reply_adv>
(scan_ll) 	error: Could not parse integer from galera-bundle-0 (using -1 instead): No digits found
(scan_ll) 	warning: Characters left over after parsing 'galera-bundle-0': 'galera-bundle-0'
1
(mainloop_del_fd) 	trace: Removing client crmd[0x5597fa720f00]
(crm_ipc_read) 	trace: No message from crmd received: Resource temporarily unavailable
(mainloop_gio_callback) 	trace: Message acquisition from crmd[0x5597fa720f00] failed: No message of desired type (-42)
(mainloop_gio_destroy) 	trace: Destroying client crmd[0x5597fa720f00]
(qb_ipcc_disconnect) 	debug: qb_ipcc_disconnect()
(qb_rb_close) 	trace: ENTERING qb_rb_close()
(qb_rb_close_helper) 	debug: Closing ringbuffer: /dev/shm/qb-8-53976-14-LDJrjk/qb-request-crmd-header
(qb_rb_close) 	trace: ENTERING qb_rb_close()
(qb_rb_close_helper) 	debug: Closing ringbuffer: /dev/shm/qb-8-53976-14-LDJrjk/qb-response-crmd-header
(qb_rb_close) 	trace: ENTERING qb_rb_close()
(qb_rb_close_helper) 	debug: Closing ringbuffer: /dev/shm/qb-8-53976-14-LDJrjk/qb-event-crmd-header
(ipc_post_disconnect) 	info: Disconnected from controller IPC API
(crm_ipc_destroy) 	trace: Destroying IPC connection to crmd: 0x5597fa71fc70
(mainloop_gio_destroy) 	trace: Destroyed client crmd[0x5597fa720f00]
(pcmk_free_ipc_api) 	debug: Releasing controller IPC API
(pcmk_free_ipc_api) 	trace: Freeing IPC API object
(mainloop_destroy_signal_entry) 	trace: Destroying signal 0
(mainloop_destroy_signal_entry) 	trace: Destroying signal 1
(mainloop_destroy_signal_entry) 	trace: Destroying signal 2
(mainloop_destroy_signal_entry) 	trace: Destroying signal 3
(mainloop_destroy_signal_entry) 	trace: Destroying signal 4
(mainloop_destroy_signal_entry) 	trace: Destroying signal 5
(mainloop_destroy_signal_entry) 	trace: Destroying signal 6
(mainloop_destroy_signal_entry) 	trace: Destroying signal 7
(mainloop_destroy_signal_entry) 	trace: Destroying signal 8
(mainloop_destroy_signal_entry) 	trace: Destroying signal 9
(mainloop_destroy_signal_entry) 	trace: Destroying signal 10
(mainloop_destroy_signal_entry) 	trace: Destroying signal 11
(mainloop_destroy_signal_entry) 	trace: Destroying signal 12
(mainloop_destroy_signal_entry) 	trace: Destroying signal 13
(mainloop_destroy_signal_entry) 	trace: Destroying signal 14
(mainloop_destroy_signal_entry) 	trace: Destroying signal 15
(mainloop_destroy_signal_entry) 	trace: Destroying signal 16
(mainloop_destroy_signal_entry) 	trace: Destroying signal 17
(mainloop_destroy_signal_entry) 	trace: Destroying signal 18
(mainloop_destroy_signal_entry) 	trace: Destroying signal 19
(mainloop_destroy_signal_entry) 	trace: Destroying signal 20
(mainloop_destroy_signal_entry) 	trace: Destroying signal 21
(mainloop_destroy_signal_entry) 	trace: Destroying signal 22
(mainloop_destroy_signal_entry) 	trace: Destroying signal 23
(mainloop_destroy_signal_entry) 	trace: Destroying signal 24
(mainloop_destroy_signal_entry) 	trace: Destroying signal 25
(mainloop_destroy_signal_entry) 	trace: Destroying signal 26
(mainloop_destroy_signal_entry) 	trace: Destroying signal 27
(mainloop_destroy_signal_entry) 	trace: Destroying signal 28
(mainloop_destroy_signal_entry) 	trace: Destroying signal 29
(mainloop_destroy_signal_entry) 	trace: Destroying signal 30
(mainloop_destroy_signal_entry) 	trace: Destroying signal 31
(mainloop_destroy_signal_entry) 	trace: Destroying signal 32
(mainloop_destroy_signal_entry) 	trace: Destroying signal 33
(mainloop_destroy_signal_entry) 	trace: Destroying signal 34
(mainloop_destroy_signal_entry) 	trace: Destroying signal 35
(mainloop_destroy_signal_entry) 	trace: Destroying signal 36
(mainloop_destroy_signal_entry) 	trace: Destroying signal 37
(mainloop_destroy_signal_entry) 	trace: Destroying signal 38
(mainloop_destroy_signal_entry) 	trace: Destroying signal 39
(mainloop_destroy_signal_entry) 	trace: Destroying signal 40
(mainloop_destroy_signal_entry) 	trace: Destroying signal 41
(mainloop_destroy_signal_entry) 	trace: Destroying signal 42
(mainloop_destroy_signal_entry) 	trace: Destroying signal 43
(mainloop_destroy_signal_entry) 	trace: Destroying signal 44
(mainloop_destroy_signal_entry) 	trace: Destroying signal 45
(mainloop_destroy_signal_entry) 	trace: Destroying signal 46
(mainloop_destroy_signal_entry) 	trace: Destroying signal 47
(mainloop_destroy_signal_entry) 	trace: Destroying signal 48
(mainloop_destroy_signal_entry) 	trace: Destroying signal 49
(mainloop_destroy_signal_entry) 	trace: Destroying signal 50
(mainloop_destroy_signal_entry) 	trace: Destroying signal 51
(mainloop_destroy_signal_entry) 	trace: Destroying signal 52
(mainloop_destroy_signal_entry) 	trace: Destroying signal 53
(mainloop_destroy_signal_entry) 	trace: Destroying signal 54
(mainloop_destroy_signal_entry) 	trace: Destroying signal 55
(mainloop_destroy_signal_entry) 	trace: Destroying signal 56
(mainloop_destroy_signal_entry) 	trace: Destroying signal 57
(mainloop_destroy_signal_entry) 	trace: Destroying signal 58
(mainloop_destroy_signal_entry) 	trace: Destroying signal 59
(mainloop_destroy_signal_entry) 	trace: Destroying signal 60
(mainloop_destroy_signal_entry) 	trace: Destroying signal 61
(mainloop_destroy_signal_entry) 	trace: Destroying signal 62
(mainloop_destroy_signal_entry) 	trace: Destroying signal 63
(mainloop_destroy_signal_entry) 	trace: Destroying signal 64
(crm_xml_cleanup) 	info: Cleaning up memory from libxml2
(crm_exit) 	info: Exiting crm_node | with status 0

Comment 4 Reid Wahl 2020-09-01 10:27:57 UTC
Thanks :)

(call_api_dispatch) 	trace: ipc-received       <node crm_subsystem="crmd" have-quorum="true" id="galera-bundle-0" uname="galera-bundle-0" crmd="member" remote_node="true"/>

Looks like set_node_info_data() in ipc_controld.c expects id to be a numeric string (e.g., "1" and "2" in my non-container cluster), and doesn't know how to handle it being "galera-bundle-0".

Comment 5 Ken Gaillot 2020-09-01 16:08:56 UTC
(In reply to Reid Wahl from comment #4)
> Looks like set_node_info_data() in ipc_controld.c expects id to be a numeric
> string (e.g., "1" and "2" in my non-container cluster), and doesn't know how
> to handle it being "galera-bundle-0".

It's expected that remote nodes don't have numeric IDs, and theoretically some future cluster stack other than corosync might not either. Thankfully this is just a log message, and the quorum value is correct.

What's surprising is that the message is output at all. Log messages shouldn't show up on stderr unless -V is given.

Interestingly, I can't reproduce the issue with a vanilla remote node, only with bundles. Unfortunately it also affects "crm_node -n". It appears that for some reason, the default log level in bundles is one higher than everywhere else.

The command still returns exit status 0, so if you're concerned about losing real errors you could grab stderr and discard it if it exits 0.
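
For example, a rough sketch of that (the temp-file handling is just one way to do it):

  err=$(mktemp)
  if out=$(podman exec galera-bundle-podman-0 crm_node -q 2>"$err"); then
      rm -f "$err"                # exit status 0: discard the spurious stderr messages
  else
      cat "$err" >&2              # nonzero exit: surface the real errors
      rm -f "$err"
  fi
  echo "$out"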

Comment 6 Reid Wahl 2020-09-01 19:31:16 UTC
The PCMK_stderr env var gets set to 1 at creation for bundles:
  - https://github.com/ClusterLabs/pacemaker/blob/master/lib/pengine/bundle.c#L211

Need to decide how to make crm_node less chatty for these (expected) scan_ll() parse errors, without hiding parse errors when they're needed.
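
As a side note (untested, and assuming env(1) is available in the container image): since the variable comes from the container's environment, you can see it from inside the container and run a one-off command without it, which should avoid the extra stderr output:

  podman exec galera-bundle-podman-0 env | grep '^PCMK_'              # expect PCMK_stderr=1
  podman exec galera-bundle-podman-0 env -u PCMK_stderr crm_node -q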

Comment 7 Ken Gaillot 2020-09-01 21:34:49 UTC
As mentioned in Comment 6, the root cause is that Pacemaker sets PCMK_stderr to 1 in the container's environment when launching it. This causes pacemaker-remoted to log to stderr, which makes "podman logs <container>" useful. However, that also means it's in the environment of every command run in the container, which makes those commands log errors to stderr (normally seen only if -V is supplied).

Ideally, we'd stop setting that in the container environment, and instead set it inside pacemaker-remoted when it launches. That way, exec'd commands would be unaffected.

Unfortunately, that approach would change the configuration of the implicitly created container resource, causing the containers to restart when the cluster is upgraded to a version with the fix. We don't want to do that in RHEL 8, though we could consider it for RHEL 9.

Instead, for RHEL 8 we'll downgrade these particular messages from error to warning, which drops them below the threshold for being displayed to stderr (without -V). That still leaves the possibility that other error log messages will show up in other situations, but it will take care of this specific issue.

The fix is merged upstream as of commit 34a0a07, which should be in RHEL 8.4. Let me know if you need it earlier than that.
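
Once a build with the fix is available (Fixed In Version is pacemaker-2.0.5-1.el8), you can check whether the container image already has it before dropping any stderr-filtering workaround, e.g.:

  podman exec galera-bundle-podman-0 rpm -q pacemaker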

Comment 11 pkomarov 2020-11-03 10:15:57 UTC
Verified:

(undercloud) [stack@undercloud-0 ~]$ ansible controller -b -mshell -a'pcs cluster status'|grep version
[WARNING]: Found both group and host with same name: undercloud
   * Current DC: controller-1 (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
   * Current DC: controller-1 (version 2.0.5-2.el8-31aa4f5515) - partition with quorum
   * Current DC: controller-1 (version 2.0.5-2.el8-31aa4f5515) - partition with quorum


[root@controller-2 ~]#  podman exec -it galera-bundle-podman-0 bash -c 'crm_node -q'
1

Comment 13 errata-xmlrpc 2021-05-18 15:26:40 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:1782


