Bug 1405205 - crm_report should not collect /var/log/lastlog (or have some safety measures included)
Summary: crm_report should not collect /var/log/lastlog (or have some safety measures included)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: pacemaker
Version: 6.9
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 6.9
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1405635
 
Reported: 2016-12-15 21:02 UTC by Jaroslav Kortus
Modified: 2017-03-21 09:52 UTC
CC List: 3 users

Fixed In Version: pacemaker-1.1.15-4.el6
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1405635
Environment:
Last Closed: 2017-03-21 09:52:44 UTC
Target Upstream Version:
Embargoed:


Attachments: (none)


Links:
  Red Hat Product Errata RHEA-2017:0629 (public, priority normal, status SHIPPED_LIVE): pacemaker bug fix update, last updated 2017-03-21 12:29:32 UTC

Description Jaroslav Kortus 2016-12-15 21:02:20 UTC
Description of problem:
crm_report collects the /var/log/lastlog file (or at least greps it for patterns for some reason).

Usually that is harmless because the file is small. If it gets large, it creates a problem for the grep used by crm_report, which scans through the binary file and buffers everything up to a newline character.

You can quite easily create a file so large that grep tries to buffer all of it, runs out of RAM, and gets killed.

An easy way to create a large /var/log/lastlog:
useradd -b /mnt/brawl -m -U -c "quota-sanity user" -u 10000000 quota-user-kPRAaCKm
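
For background: /var/log/lastlog is indexed by UID as an array of fixed-size records (292 bytes each on x86_64), so a single user with UID 10000000 makes the file's apparent size roughly 2.9 GB, and the file contains no newlines. A minimal sketch of the failure mode against a hypothetical throwaway file, so the real lastlog stays untouched:

# Create a sparse ~2.9 GB file with no newlines, mimicking a bloated lastlog
dd if=/dev/zero of=/tmp/fake-lastlog bs=1 count=1 seek=2920000291
# grep never sees a newline, so it buffers the entire "line" in memory;
# on a memory-constrained node it gets OOM-killed
grep -a some-pattern /tmp/fake-lastlog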

Even though these circumstances are not exactly common, we do crash on a file that most likely does not contain useful info. Can you please add a check there, or remove the grepping through it completely?

Ideally no part of crm_report should be able to eat up all memory :).

Version-Release number of selected component (if applicable):
pacemaker-cli-1.1.15-3.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. on any cluster node: useradd -b /mnt/brawl -m -U -c "quota-sanity user" -u 10000000 quota-user-kPRAaCKm
2. run crm_report that collects info from all nodes
3.

Actual results:
grep crashes (signal 6) on the affected node, and some files may be missing from the report.

Expected results:
* ideally, skip /var/log/lastlog collection entirely
* make crm_report more cautious when grepping binary files (a size check would be neat; see the sketch below)
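
For illustration, a sketch of such a size guard (the 100 MB cap, $pattern, and the loop are hypothetical, not from any shipped fix):

# Hypothetical guard: only grep files under an arbitrary 100 MB cap
max_bytes=$((100 * 1024 * 1024))
for f in /var/log/*; do
    [ -f "$f" ] || continue
    size=$(stat -c %s "$f" 2>/dev/null) || continue
    [ "$size" -le "$max_bytes" ] && grep -l -e "$pattern" "$f"
done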

Additional info:

Comment 5 Ken Gaillot 2016-12-16 21:58:07 UTC
The problem is that crm_report dynamically detects what system logs are used for the cluster by grepping for a particular pattern in (up to) all files in /var/log.

It's already on the long-term plan to convert crm_report from a shell script to python, to make the file handling much more efficient.

But for 6.9 timeframe, I can make sure "file" returns "text" or "compressed" before doing the grep. That will at least skip lastlog, wtmp, etc.
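
A sketch of the shape of that guard (illustrative only; the actual change is the upstream commit referenced in the next comment, and $pattern here stands in for the real search pattern):

# Only grep files that "file" classifies as text or compressed
for f in /var/log/*; do
    case "$(file "$f" 2>/dev/null)" in
        *text*|*compressed*) grep -l -e "$pattern" "$f" ;;
        *) ;;  # binary files such as lastlog and wtmp are skipped
    esac
done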

Comment 6 Ken Gaillot 2016-12-19 15:37:02 UTC
Fixed by upstream commit 083488ce

Comment 8 Andrew Beekhof 2017-01-09 03:02:29 UTC
(In reply to Ken Gaillot from comment #5)
> The problem is that crm_report dynamically detects what system logs are used
> for the cluster by grepping for a particular pattern in (up to) all files in
> /var/log.
> 
> It's already on the long-term plan to convert crm_report from a shell script
> to python, to make the file handling much more efficient.

If you say so, but isn't all that turned off when it gets called by sosreport?

> 
> But for 6.9 timeframe, I can make sure "file" returns "text" or "compressed"
> before doing the grep. That will at least skip lastlog, wtmp, etc.

Comment 9 Ken Gaillot 2017-01-09 18:04:09 UTC
(In reply to Andrew Beekhof from comment #8)
> (In reply to Ken Gaillot from comment #5)
> > The problem is that crm_report dynamically detects what system logs are used
> > for the cluster by grepping for a particular pattern in (up to) all files in
> > /var/log.
> > 
> > It's already on the long-term plan to convert crm_report from a shell script
> > to python, to make the file handling much more efficient.
> 
> If you say so, but isn't all that turned off when it gets called by
> sosreport?

You're right -- this will only matter when called directly by the user (as crm_report or "pcs cluster report"). It will also be turned off if the user calls crm_report with -M. But the fix is simple, and users sometimes do call it directly.
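
For context, the two call paths distinguished here (destination paths and timestamps are illustrative; -f is crm_report's from-time option, and -M is the flag mentioned above):

# Direct invocation: the log search in /var/log runs, so the guard matters
crm_report -f "2017-01-11 14:10" /tmp/direct-report
# With -M, crm_report skips the log search entirely
crm_report -M -f "2017-01-11 14:10" /tmp/no-search-report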

Comment 10 michal novacek 2017-01-13 10:41:55 UTC
I have verified with pacemaker-1.1.15-4 that a report can be created while /var/log/lastlog
is ~300% of RAM+swap.

[root@virt-009 ~]# tail -f /etc/passwd
...
quota-user:x:10000000:502:quota-sanity user:/home/brawl/quota-user:/bin/bash

[root@virt-009 ~]# ls -l /var/log/lastlog 
-rw-r--r--. 1 root root 2920000292 Jan 12 11:36 /var/log/lastlog
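
(That size matches lastlog's UID-indexed layout: (10000000 + 1) records x 292 bytes per record = 2,920,000,292 bytes, exactly the figure shown above.)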

Cluster setup as shown below: (1) pcs status, (2) pcs config.

[root@virt-009 ~]# pcs cluster report /tmp/le-report
Error: /tmp/le-report.tar.bz2 already exists, use --force to overwrite

[root@virt-009 ~]# pcs cluster report /tmp/le-report2
virt-009:   Calculated node list: virt-061 virt-057 virt-008 virt-062 virt-067 virt-060 virt-009 virt-018 virt-059 virt-006 virt-013 virt-007 virt-056 virt-016 virt-058 virt-014 
virt-009:   Collecting data from virt-061 virt-057 virt-008 virt-062 virt-067 virt-060 virt-009 virt-018 virt-059 virt-006 virt-013 virt-007 virt-056 virt-016 virt-058 virt-014  (01/11/17 14:10:00 to 01/12/17 14:10:34)
virt-009:   Including all logs after line 29268 from /var/log/cluster/corosync.log-20170112.gz
...

[root@virt-009 ~]# echo $?
0
[root@virt-009 ~]# ls -l /tmp/le-report.tar.bz2
-rw-r--r--. 1 root root 11095127 Jan 12 11:46 /tmp/le-report.tar.bz2

-----

> (1) pcs cluster setup
[root@virt-009 ~]# pcs status
Cluster name: STSRHTS23364
Stack: cman
Current DC: virt-006 (version 1.1.15-3.el6-e174ec8) - partition with quorum
Last updated: Thu Jan 12 11:41:04 2017          Last change: Thu Jan 12 11:05:24 2017 by root via crm_attribute on virt-009

16 nodes and 48 resources configured

Online: [ virt-006 virt-007 virt-008 virt-009 virt-013 virt-014 virt-016 virt-018 virt-056 virt-057 virt-058 virt-059 virt-060 virt-061 virt-062 virt-067 ]

Full list of resources:

 fence-virt-006 (stonith:fence_xvm):    Started virt-006
 fence-virt-007 (stonith:fence_xvm):    Started virt-007
 fence-virt-008 (stonith:fence_xvm):    Started virt-008
 fence-virt-009 (stonith:fence_xvm):    Started virt-009
 fence-virt-013 (stonith:fence_xvm):    Started virt-013
 fence-virt-014 (stonith:fence_xvm):    Started virt-014
 fence-virt-016 (stonith:fence_xvm):    Started virt-016
 fence-virt-018 (stonith:fence_xvm):    Started virt-018
 fence-virt-056 (stonith:fence_xvm):    Started virt-056
 fence-virt-057 (stonith:fence_xvm):    Started virt-057
 fence-virt-058 (stonith:fence_xvm):    Started virt-058
 fence-virt-059 (stonith:fence_xvm):    Started virt-059
 fence-virt-060 (stonith:fence_xvm):    Started virt-060
 fence-virt-061 (stonith:fence_xvm):    Started virt-061
 fence-virt-062 (stonith:fence_xvm):    Started virt-062
 fence-virt-067 (stonith:fence_xvm):    Started virt-067
 Clone Set: clvmd-clone-clone [clvmd-clone]
     Started: [ virt-006 virt-007 virt-008 virt-009 virt-013 virt-014 virt-016 virt-018 virt-056 virt-057 virt-058 virt-059 virt-060 virt-061 virt-062 virt-067 ]
 Clone Set: dlm-clone-clone [dlm-clone]
     Started: [ virt-006 virt-007 virt-008 virt-009 virt-013 virt-014 virt-016 virt-018 virt-056 virt-057 virt-058 virt-059 virt-060 virt-061 virt-062 virt-067 ]

Daemon Status:
  cman: active/disabled
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

> (2) pcs config
[root@virt-009 ~]# pcs config
Cluster Name: STSRHTS23364
Corosync Nodes:
 virt-006 virt-007 virt-008 virt-009 virt-013 virt-014 virt-016 virt-018 virt-056 virt-057 virt-058 virt-059 virt-060 virt-061 virt-062 virt-067
Pacemaker Nodes:
 virt-006 virt-007 virt-008 virt-009 virt-013 virt-014 virt-016 virt-018 virt-056 virt-057 virt-058 virt-059 virt-060 virt-061 virt-062 virt-067

Resources:
 Clone: clvmd-clone-clone
  Resource: clvmd-clone (class=ocf provider=heartbeat type=Dummy)
   Operations: start interval=0s timeout=20 (clvmd-clone-start-interval-0s)
               stop interval=0s timeout=20 (clvmd-clone-stop-interval-0s)
               monitor interval=10 timeout=20 (clvmd-clone-monitor-interval-10)
 Clone: dlm-clone-clone
  Resource: dlm-clone (class=ocf provider=heartbeat type=Dummy)
   Operations: start interval=0s timeout=20 (dlm-clone-start-interval-0s)
               stop interval=0s timeout=20 (dlm-clone-stop-interval-0s)
               monitor interval=10 timeout=20 (dlm-clone-monitor-interval-10)

Stonith Devices:
 Resource: fence-virt-006 (class=stonith type=fence_xvm)
  Attributes: delay=5 pcmk_host_check=static-list pcmk_host_list=virt-006 pcmk_host_map=virt-006:virt-006.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-006-monitor-interval-60s)
 Resource: fence-virt-007 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-007 pcmk_host_map=virt-007:virt-007.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-007-monitor-interval-60s)
 Resource: fence-virt-008 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-008 pcmk_host_map=virt-008:virt-008.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-008-monitor-interval-60s)
 Resource: fence-virt-009 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-009 pcmk_host_map=virt-009:virt-009.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-009-monitor-interval-60s)
 Resource: fence-virt-013 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-013 pcmk_host_map=virt-013:virt-013.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-013-monitor-interval-60s)
 Resource: fence-virt-014 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-014 pcmk_host_map=virt-014:virt-014.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-014-monitor-interval-60s)
 Resource: fence-virt-016 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-016 pcmk_host_map=virt-016:virt-016.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-016-monitor-interval-60s)
 Resource: fence-virt-018 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-018 pcmk_host_map=virt-018:virt-018.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-018-monitor-interval-60s)
 Resource: fence-virt-056 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-056 pcmk_host_map=virt-056:virt-056.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-056-monitor-interval-60s)
 Resource: fence-virt-057 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-057 pcmk_host_map=virt-057:virt-057.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-057-monitor-interval-60s)
 Resource: fence-virt-058 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-058 pcmk_host_map=virt-058:virt-058.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-058-monitor-interval-60s)
 Resource: fence-virt-059 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-059 pcmk_host_map=virt-059:virt-059.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-059-monitor-interval-60s)
 Resource: fence-virt-060 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-060 pcmk_host_map=virt-060:virt-060.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-060-monitor-interval-60s)
 Resource: fence-virt-061 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-061 pcmk_host_map=virt-061:virt-061.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-061-monitor-interval-60s)
 Resource: fence-virt-062 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-062 pcmk_host_map=virt-062:virt-062.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-062-monitor-interval-60s)
 Resource: fence-virt-067 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-067 pcmk_host_map=virt-067:virt-067.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-067-monitor-interval-60s)
Fencing Levels:

Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.15-3.el6-e174ec8
 have-watchdog: false
Node Attributes:
 virt-009: a=a

Comment 12 errata-xmlrpc 2017-03-21 09:52:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0629.html

