Bug 1650697 - sosreport fails to generate archive due to No space left on device but returns exit status 0
Summary: sosreport fails to generate archive due to No space left on device but returns exit status 0
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: sos
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Pavel Moravec
QA Contact: Upgrades and Supportability
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-16 21:33 UTC by Larry O'Leary
Modified: 2021-03-15 07:31 UTC
CC List: 8 users

Fixed In Version: sos-4.0-2.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-03-15 07:31:40 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID: GitHub sosreport/sos issue 2071
Status: closed
Summary: sosreport: cmd return should be != 0 when there is no space on device
Last Updated: 2021-02-19 14:28:45 UTC

Description Larry O'Leary 2018-11-16 21:33:07 UTC
Description of problem:
If there is insufficient space on the device when executing the various plug-ins, it is possible that no sosreport archive is generated. However, the exit status of sosreport is 0, giving the impression that sosreport finished successfully.
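
To illustrate why this matters, here is a minimal sketch of a hypothetical automation wrapper (not part of sos; paths are examples) that trusts the exit status alone:

```python
import subprocess

# Hypothetical automation around sosreport; the flags match the reproducer
# below, the tmp-dir path is an example.
result = subprocess.run(
    ["sosreport", "--batch", "--no-report",
     "--tmp-dir", "/var/tmp/sostest/sosreport"],
)

# With this bug, returncode is 0 even when no archive was generated, so the
# "success" branch runs and downstream tooling looks for an archive that
# does not exist.
if result.returncode == 0:
    print("sosreport reported success")
else:
    print(f"sosreport failed with exit status {result.returncode}")
```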

Version-Release number of selected component (if applicable):
3.6-11.el7_6

How reproducible:
Always

Steps to Reproduce:
```sh
_tmp_dir='/var/tmp/sostest'

# Start from a clean, dedicated temp directory.
[ -n "$_tmp_dir" ] && [ -e "$_tmp_dir" ] && rm -rf "$_tmp_dir"
mkdir -p "$_tmp_dir"/sosreport

# Free space on the filesystem in bytes (available blocks * block size).
_free_space=$(($(stat -f --format="%a*%S" "$_tmp_dir")))

# Fill the filesystem, leaving only ~30 MiB (31457280 bytes) free, so that
# sosreport runs out of space while collecting plugin data.
_big_file_size=$((_free_space - 31457280))
_dd_count=$((_big_file_size / 4194304))  # number of 4 MiB blocks to write
[ "$_dd_count" -gt 0 ] && {
    dd if=/dev/urandom of="$_tmp_dir"/bigfile bs=4M count="$_dd_count" iflag=fullblock
}

# Run sosreport on the nearly full filesystem and show its exit status.
sosreport --log-size=100 --batch --no-report --tmp-dir="${_tmp_dir}"/sosreport > "${_tmp_dir}"/sosreport.out 2> "${_tmp_dir}"/sosreport.err
echo "Exit State: $?"
```


Actual results:
- Exit Status: 0
- $_tmp_dir/sosreport.out includes the following output:

        No space left on device while collecting plugin data

- $_tmp_dir/sosreport/ directory is empty.

Expected results:
- Exit Status: 1
- $_tmp_dir/sosreport.out includes the following output:

        No space left on device while collecting plugin data

- $_tmp_dir/sosreport/ directory is empty.

Comment 2 Pavel Moravec 2018-11-18 09:29:06 UTC
Filip, as you were involved in handling disk-full exceptions recently, could you please suggest a patch here / extend your work to return a proper exit value? (There will be _several_ such scenarios, e.g. sosreport failing while loading plugins, while generating the tarball, etc.)

Comment 3 Bryn M. Reeves 2018-11-19 12:51:34 UTC
I think we've since done a lot of work on error handling in general; it looks like some of the exit status propagation is not working properly now. I'm not sure we can blame that on Filip's work yet ;-)

I will try to look at this this week, time permitting.

Comment 4 Pavel Moravec 2018-11-19 13:02:48 UTC
(In reply to Bryn M. Reeves from comment #3)
> I think we've since done a lot of work on error handling in general; it
> looks like some of the exit status propagation is not working properly now.
> I'm not sure we can blame that on Filip's work yet ;-)
> 
> I will try to look at this this week, time permitting.

I verified this is _not_ a regression; the same behaviour was seen in 3.4, 3.5, and 3.6. I checked that we exit with 0 due to:

https://github.com/sosreport/sos/blob/master/sos/sosreport.py#L1491

so the error handling that Filip improved still needs to be extended to end with a proper exit value, in the scenarios where that makes sense.
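
To make the failure mode concrete, here is a minimal, simplified sketch (not the actual sos source) of how an intended non-zero exit can be lost: sys.exit() raises SystemExit, and a broad handler further up the stack can absorb it and continue to the normal exit path.

```python
import sys

def collect_plugin_data():
    try:
        raise OSError(28, "No space left on device")  # ENOSPC
    except OSError as e:
        print(" %s while collecting plugin data" % e.strerror)
        sys.exit(1)  # raises SystemExit(1)...

def execute():
    try:
        collect_plugin_data()
    except (SystemExit, KeyboardInterrupt):
        pass  # ...which a catch-all like this silently absorbs
    return 0  # normal path: the process exits 0

sys.exit(execute())
```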

Comment 5 Pavel Moravec 2019-03-29 11:25:50 UTC
Scope of 7.7 closed, rescheduling for potential inclusion in 7.8.

Comment 8 Filip Krska 2019-11-06 15:40:12 UTC
The reproducer from #c0 still applies to sos-3.8-3.el7.noarch, sos-3.7-4.el8.noarch

The following patch helped me propagate the error in my env (1minutetip el7.7, el8.1):

# diff -up /usr/lib/python3.6/site-packages/sos/sosreport.py /tmp/sosreport.py.new
--- /usr/lib/python3.6/site-packages/sos/sosreport.py	2019-09-12 02:53:26.000000000 -0400
+++ /tmp/sosreport.py.new	2019-11-06 10:30:19.044742210 -0500
@@ -1048,6 +1048,7 @@ class SoSReport(object):
             if e.errno in fatal_fs_errors:
                 self.ui_log.error("\n %s while collecting plugin data\n"
                                   % e.strerror)
+                raise
                 self._exit(1)
             self.handle_exception(plugname, "collect")
         except Exception:
@@ -1363,6 +1364,8 @@ class SoSReport(object):
             self.version()
             return self.final_work()
 
+        except (IOError):
+            self._exit(1)
         except (OSError):
             if self.opts.debug:
                 raise
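
For context, a minimal sketch of the idea behind the patch, with a simplified call structure (not sos's real one): the inner handler re-raises the error instead of trying to exit mid-collection, and the new top-level except clause turns it into exit status 1. (On Python 3, IOError is an alias of OSError, so it matches here.)

```python
import errno
import sys

def collect():
    try:
        raise OSError(errno.ENOSPC, "No space left on device")
    except OSError as e:
        if e.errno in (errno.ENOSPC, errno.EROFS):
            print(" %s while collecting plugin data" % e.strerror)
            raise  # patched behaviour: propagate instead of exiting here

def execute():
    try:
        collect()
        return 0  # the final_work() path: normal, successful exit
    except IOError:  # the patch's new top-level handler
        sys.exit(1)

sys.exit(execute())
```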

-----------------

# sosreport --log-size=100 --batch --no-report --tmp-dir="${_tmp_dir}"/sosreport; echo "Exit State: $?"
...
  Starting 84/84 yum             [Running: dnf dracut rpm selinux services soundcard ssh subscription_manager sunrpc system systemd sysvipc teamd tuned usb udev x11 xfs yum]

 No space left on device while collecting plugin data

Exit State: 1

Comment 9 Pavel Moravec 2020-06-11 06:50:26 UTC
Should be fixed by upstream https://github.com/sosreport/sos/issues/2071.

Will appear in RHEL 8.4.

Comment 11 Pavel Moravec 2020-11-04 21:10:11 UTC
This should have been fixed in sos-4.0-2 for RHEL 8.4. If interested, you can check whether it really is:

A yum repository for the build of sos-4.0-2.el8 (task 32548242) is available at:

http://brew-task-repos.usersys.redhat.com/repos/official/sos/4.0/2.el8/

You can install the rpms locally by putting this .repo file in your /etc/yum.repos.d/ directory:

http://brew-task-repos.usersys.redhat.com/repos/official/sos/4.0/2.el8/sos-4.0-2.el8.repo

RPMs and build logs can be found in the following locations:
http://brew-task-repos.usersys.redhat.com/repos/official/sos/4.0/2.el8/noarch/

The full list of available rpms is:
http://brew-task-repos.usersys.redhat.com/repos/official/sos/4.0/2.el8/noarch/sos-4.0-2.el8.src.rpm
http://brew-task-repos.usersys.redhat.com/repos/official/sos/4.0/2.el8/noarch/sos-4.0-2.el8.noarch.rpm
http://brew-task-repos.usersys.redhat.com/repos/official/sos/4.0/2.el8/noarch/sos-audit-4.0-2.el8.noarch.rpm

The repository will be available for the next 50 days. Scratch build output will be deleted
earlier, based on the Brew scratch build retention policy.

Comment 13 RHEL Program Management 2021-03-15 07:31:40 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

