Bug 1746968 - rsyslog and fluentd write /etc/hostname at startup, causes systemd-hostnamed to not start
Summary: rsyslog and fluentd write /etc/hostname at startup, causes systemd-hostnamed to not start
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.2.0
Assignee: Jeff Cantrill
QA Contact: Mike Fiedler
URL:
Whiteboard: aos-scalability-42
Duplicates: 1748149 1777098 1817769
Depends On:
Blocks:
 
Reported: 2019-08-29 15:59 UTC by Mike Fiedler
Modified: 2023-03-24 15:21 UTC
CC List: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:38:50 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID | Status | Summary | Last Updated
Github openshift cluster-logging-operator pull 239 | closed | Bug 1746968: Remove hostpath mount /etc/hostname and pass via env var | 2021-02-01 16:23:57 UTC
Red Hat Product Errata RHBA-2019:2922 | | | 2019-10-16 06:39:04 UTC

Internal Links: 1817769 1818086

Description Mike Fiedler 2019-08-29 15:59:04 UTC
Description of problem:

At some point one of the RHCOS nodes in the scalability GCP cluster went NotReady; I'm not sure what triggered it. Logging in to the node, the kubelet was not starting because systemd-hostnamed could not start:

-- Reboot --
Aug 29 11:24:10 localhost systemd[1]: Starting Hostname Service...
Aug 29 11:24:10 localhost systemd-hostnamed[1093]: Failed to read hostname and machine information: Is a directory
Aug 29 11:24:10 localhost systemd[1]: Started Hostname Service.
Aug 29 11:24:10 localhost systemd[1]: systemd-hostnamed.service: Main process exited, code=exited, status=1/FAILURE
Aug 29 11:24:10 localhost systemd[1]: systemd-hostnamed.service: Failed with result 'exit-code'.
Aug 29 11:24:10 localhost systemd[1]: systemd-hostnamed.service: Consumed 49ms CPU time
-- Reboot --
Aug 29 15:09:57 localhost systemd[1]: Starting Hostname Service...
Aug 29 15:09:57 localhost systemd-hostnamed[1096]: Failed to read hostname and machine information: Is a directory
Aug 29 15:09:57 localhost systemd[1]: Started Hostname Service.
Aug 29 15:09:57 localhost systemd[1]: systemd-hostnamed.service: Main process exited, code=exited, status=1/FAILURE
Aug 29 15:09:57 localhost systemd[1]: systemd-hostnamed.service: Failed with result 'exit-code'.
Aug 29 15:09:57 localhost systemd[1]: systemd-hostnamed.service: Consumed 50ms CPU time
Aug 29 15:18:01 localhost systemd[1]: Starting Hostname Service...
Aug 29 15:18:01 localhost systemd-hostnamed[3901]: Failed to read hostname and machine information: Is a directory
Aug 29 15:18:01 localhost systemd[1]: Started Hostname Service.
Aug 29 15:18:01 localhost systemd[1]: systemd-hostnamed.service: Main process exited, code=exited, status=1/FAILURE
Aug 29 15:18:01 localhost systemd[1]: systemd-hostnamed.service: Failed with result 'exit-code'.
Aug 29 15:18:01 localhost systemd[1]: systemd-hostnamed.service: Consumed 48ms CPU time


Looking in /etc, there was an empty hostname directory owned by root. It was not created by any human; the machine had never been logged into until the investigation started.

Renaming this directory to hostname.bad allowed systemd-hostnamed to start, and the bad directory was later deleted.

What can we capture if this happens again?


Version-Release number of selected component (if applicable): 4.2.0-0.nightly-2019-08-20-162755


How reproducible: Unknown


Steps to Reproduce:

Unknown - this particular node showed up as NotReady after running fine for 36+ hours.

Actual results:


[core@localhost ~]$ systemctl status systemd-hostnamed.service
● systemd-hostnamed.service - Hostname Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-hostnamed.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2019-08-29 15:09:57 UTC; 1min 5s ago
     Docs: man:systemd-hostnamed.service(8)
           man:hostname(5)
           man:machine-info(5)
           https://www.freedesktop.org/wiki/Software/systemd/hostnamed
  Process: 1096 ExecStart=/usr/lib/systemd/systemd-hostnamed (code=exited, status=1/FAILURE)
 Main PID: 1096 (code=exited, status=1/FAILURE)
      CPU: 50ms

##################### kubelet logs ###################################
Aug 29 15:09:57 localhost systemd[1]: Starting Hostname Service...
Aug 29 15:09:57 localhost systemd-hostnamed[1096]: Failed to read hostname and machine information: Is a directory
Aug 29 15:09:57 localhost systemd[1]: Started Hostname Service.
Aug 29 15:09:57 localhost systemd[1]: systemd-hostnamed.service: Main process exited, code=exited, status=1/FAILURE
Aug 29 15:09:57 localhost systemd[1]: systemd-hostnamed.service: Failed with result 'exit-code'.
Aug 29 15:09:57 localhost systemd[1]: systemd-hostnamed.service: Consumed 50ms CPU time

Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.937806    5706 logs.go:300] Finish parsing log file "/var/log/pods/openshift-dns_dns-default-smnx4_29b21ecd-c886-11e9-9974-42010a000002/dns-node-resolve>
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.938016    5706 manager.go:370] Recovery completed
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.938226    5706 logs.go:300] Finish parsing log file "/var/log/pods/openshift-dns_dns-default-smnx4_29b21ecd-c886-11e9-9974-42010a000002/dns/0.log"
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.938348    5706 container.go:464] Start housekeeping for container "/system.slice/system-sshd\\x2dkeygen.slice"
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964611    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/var.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964654    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sysroot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964662    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964668    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964675    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964681    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964687    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964780    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964824    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964837    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964854    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964866    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964876    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sysroot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964895    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/var.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964912    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964919    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964930    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964941    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964952    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964962    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sysroot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964973    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/var.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964982    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.964992    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965003    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965013    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965024    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965034    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sysroot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965044    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/var.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965054    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965063    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965072    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965081    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965096    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965106    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/sysroot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965115    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/var.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965124    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/boot.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.965133    5706 factory.go:113] Factory "systemd" can handle container "/system.slice/var.mount", but ignoring.
Aug 29 15:23:59 localhost hyperkube[5706]: I0829 15:23:59.966241    5706 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Aug 29 15:24:00 localhost hyperkube[5706]: E0829 15:24:00.007153    5706 kubelet.go:2254] node "localhost" not found
Aug 29 15:24:00 localhost hyperkube[5706]: I0829 15:24:00.007165    5706 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Aug 29 15:24:00 localhost hyperkube[5706]: I0829 15:24:00.007166    5706 kubelet.go:1832] skipping pod synchronization - container runtime status check may not have completed yet.
Aug 29 15:24:00 localhost hyperkube[5706]: E0829 15:24:00.107433    5706 kubelet.go:2254] node "localhost" not found
Aug 29 15:24:00 localhost hyperkube[5706]: E0829 15:24:00.110412    5706 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: leases.coordination.k8s.io "localhost" is forbidden: U>
Aug 29 15:24:00 localhost hyperkube[5706]: F0829 15:24:00.114404    5706 kubelet.go:1354] Kubelet failed to get node info: failed to get instance ID from cloud provider: instance not found
Aug 29 15:24:00 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Aug 29 15:24:00 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 29 15:24:00 localhost systemd[1]: kubelet.service: Consumed 481ms CPU time

Comment 1 Micah Abbott 2019-08-29 17:14:15 UTC
(In reply to Mike Fiedler from comment #0)

> What can we capture if this happens again?

Let's start with the full journal after the reboot.  Maybe NetworkManager isn't able to set the hostname correctly and the error is logged?

Comment 3 Mike Fiedler 2019-08-30 18:17:23 UTC
On the failing systems, the hostname command run as root returns "localhost". On working systems, hostname returns the real DNS name of the system.

Comment 4 Mike Fiedler 2019-08-30 18:54:37 UTC
Recovery on this latest instance was:

mv /etc/hostname /etc/hostname.bad
systemctl restart systemd-hostnamed
hostnamectl status    (verify the hostname is now set)
systemctl restart kubelet

I thought I had tried that on the node yesterday and it did not recover, but maybe the order was different.

Comment 5 Mike Fiedler 2019-09-03 13:49:27 UTC
Removing testblocker since we've figured out how to rescue the nodes in this state.

Comment 6 Dusty Mabe 2019-09-03 20:42:00 UTC
ok I think I've got some new information. It appears the rsyslog daemonset in the openshift-logging namespace has a volume mount of `/etc/hostname` into the container:

```
      - hostPath:
          path: /etc/hostname
          type: ""
        name: dockerhostname

```

It appears that if `/etc/hostname` doesn't exist on the host, something will create it (as a directory in this case), and thus we get an `/etc/hostname` directory created at approximately the same time the rsyslog container is started:

```
[root@ats-cl-sn2h2-w-c-zrknt etc]# stat /etc/hostname
  File: /etc/hostname
  Size: 6               Blocks: 0          IO Block: 4096   directory
Device: 803h/2051d      Inode: 142620217   Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:etc_t:s0
Access: 2019-09-03 20:30:30.475026448 +0000
Modify: 2019-09-03 20:29:52.774519598 +0000
Change: 2019-09-03 20:29:52.774519598 +0000
 Birth: -
```

From the journal:

```
Sep 03 20:29:52 ats-cl-sn2h2-w-c-zrknt.c.openshift-perfscale.internal hyperkube[1421]: I0903 20:29:52.771555    1421 kubelet_pods.go:151] container: openshift-logging/rsyslog-nw5gd/rsyslog podIP: "10.131.97.187" creating hosts mount: true
```


Out of this investigation come two questions:

- Should the volumeMount for the rsyslog daemonset explicitly be `type: File` rather than allowing a directory? (A sketch of what that could look like follows this list.)
- Why is the `/etc/hostname` file not there to begin with? Is that desired behavior?
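
For illustration, a minimal sketch of what that first suggestion could look like in the collector daemonset spec, combining the existing `readOnly: true` mount (see comment 11) with an explicit `type: File` on the hostPath volume. This is a sketch of the proposal, not the change that eventually merged:

```
      volumes:
      - hostPath:
          path: /etc/hostname
          type: File          # kubelet refuses the mount instead of letting a directory be created if the file is missing
        name: dockerhostname
      ...
        volumeMounts:
        - mountPath: /etc/docker-hostname
          name: dockerhostname
          readOnly: true
```

Note that with `type: File`, a missing /etc/hostname means the pod fails to start rather than silently leaving a directory on the host, which is why the second question (why the file is missing at all) still matters.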

Comment 7 Micah Abbott 2019-09-03 20:52:43 UTC
Jeff, could someone from the Logging team help out with this BZ?

Comment 9 Mike Fiedler 2019-09-03 22:40:54 UTC
Adding Rich and Noriko

Comment 10 Rich Megginson 2019-09-04 01:40:00 UTC
Not sure; we should either mount /etc/hostname read-only, or not mount it at all.

Comment 11 Noriko Hosoi 2019-09-04 17:55:14 UTC
Compared rsyslog vs. fluentd. In terms of hostname handling, they look identical. I wonder why this issue is observed just for rsyslog... (or is it?)

$ oc rsh $FLUENTD_POD
sh-4.2# ls -l /etc/hostname
-rw-r--r--. 1 root root 14 Sep  4 17:18 /etc/hostname
sh-4.2# ls -lZ /etc/hostname
-rw-r--r--. root root system_u:object_r:container_file_t:s0:c523,c879 /etc/hostname
sh-4.2# cat /etc/hostname
fluentd-ch4jb
sh-4.2# mount | egrep hostname
tmpfs on /etc/hostname type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
/dev/xvda3 on /etc/docker-hostname type xfs (ro,relatime,seclabel,attr2,inode64,prjquota)
sh-4.2# ls -l /etc/docker-hostname/
total 0

        volumeMounts:
        - mountPath: /etc/docker-hostname
          name: dockerhostname
          readOnly: true

      volumes:
      - hostPath:
          path: /etc/hostname
          type: ""
        name: dockerhostname


$ oc rsh $RSYSLOG_POD
sh-4.2# ls -l /etc/hostname
-rw-r--r--. 1 root root 14 Sep  4 17:33 /etc/hostname
sh-4.2# ls -lZ /etc/hostname
-rw-r--r--. root root system_u:object_r:container_file_t:s0:c113,c896 /etc/hostname
sh-4.2# cat /etc/hostname
rsyslog-gvnrq
sh-4.2# mount | egrep hostname
tmpfs on /etc/hostname type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
/dev/xvda3 on /etc/docker-hostname type xfs (ro,relatime,seclabel,attr2,inode64,prjquota)
sh-4.2# ls -l /etc/docker-hostname/
total 0

        volumeMounts:
        - mountPath: /etc/docker-hostname
          name: dockerhostname
          readOnly: true

      volumes:
      - hostPath:
          path: /etc/hostname
          type: ""
        name: dockerhostname

Comment 12 Rich Megginson 2019-09-04 18:00:52 UTC
I'm not sure why we don't see a problem with fluentd. We have several problems with rsyslog related to file/directory interactions with the host, configmaps, and secrets that should affect fluentd the same way, but don't; I don't know why rsyslog hits these problems and fluentd does not.

For this particular problem, maybe we should just make sure all of our mounts that should not modify files/dirs are mounted readonly.

Comment 13 Noriko Hosoi 2019-09-04 18:10:40 UTC
(In reply to Rich Megginson from comment #12)
> For this particular problem, maybe we should just make sure all of our
> mounts that should not modify files/dirs are mounted readonly.

For dockerhostname, is this not good enough?
        volumeMounts:
        - mountPath: /etc/docker-hostname
          name: dockerhostname
          readOnly: true
          ^^^^^^^^^^^^^^

Note: we mount the following mount paths without readOnly: true
        - mountPath: /run/log/journal
          name: runlogjournal
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/run
          name: varrun
        - mountPath: /var/lib/rsyslog.pod
          name: filebufferstorage
        - mountPath: /etc/rsyslog/metrics
          name: collector-metrics

Comment 14 Rich Megginson 2019-09-04 18:15:28 UTC
> For dockerhostname, this is not good enough?

I'm not sure how it is creating the directory on the host if it is mounted readOnly.  Maybe we need to add a `type: File` as suggested in https://bugzilla.redhat.com/show_bug.cgi?id=1746968#c6

> Note: we mount the following mount paths without readOnly: true

I think we can mount /run/log/journal readonly - we should not be writing there.  Same with /etc/rsyslog/metrics.  Everything else in that list should be writable.
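
For illustration only, a sketch of what marking those two mounts read-only could look like, reusing the mount paths and volume names from comment 13 (a suggestion sketch, not the fix that was ultimately merged):

```
        volumeMounts:
        - mountPath: /run/log/journal
          name: runlogjournal
          readOnly: true        # per comment 14, the collector should not write to the journal
        - mountPath: /etc/rsyslog/metrics
          name: collector-metrics
          readOnly: true        # per comment 14, this path is only read by the collector
```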

Comment 15 Dusty Mabe 2019-09-04 18:54:05 UTC
(In reply to Rich Megginson from comment #14)
> > For dockerhostname, this is not good enough?
> 
> I'm not sure how it is creating the directory on the host if it is mounted
> readOnly.  Maybe we need to add a `type: File` as suggested in
> https://bugzilla.redhat.com/show_bug.cgi?id=1746968#c6

I think it's the platform (Kubernetes) that is creating the directory before the rsyslog container is started. Making it an optional mount, and also adding `type: File` as I suggested before, would probably address it.

Comment 16 Noriko Hosoi 2019-09-04 22:13:44 UTC
(In reply to Dusty Mabe from comment #15) 
> I think it's the platform (kubernetes) that is creating the directory before the
> rsyslog container is started. Probably making it an optional mount and also 
> adding `type: File` as I suggested before.

Thanks for your comments, @Dusty. Adding `type: File` is done. Regarding the "optional mount", does that mean we are supposed to introduce an environment variable or something to control whether "/etc/hostname" is mounted or not? (I'd appreciate it if you correct me if I'm wrong...)

Comment 17 Dusty Mabe 2019-09-05 13:17:39 UTC
Hi Noriko. Let me describe the behavior that we want:

- the type of mount should be `type: File`. We don't want to mount in a directory and we certainly don't want any file or directory to be created by Kubernetes on the host if it doesn't already exist.
- If the `/etc/hostname` file doesn't exist on the host the container should be able to be started and continue to operate normally.

You said you already took care of that first bullet point. Can you test to make sure the second bullet point is satisfied? If not, can we satisfy that condition?
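
For reference, the fix that ultimately merged (linked PR cluster-logging-operator #239, "Remove hostpath mount /etc/hostname and pass via env var") went a different way: drop the hostPath mount entirely and hand the collector the node name through an environment variable. A minimal sketch of that pattern using the Downward API follows; the variable name and the use of `spec.nodeName` are illustrative assumptions, not details taken from the PR:

```
        env:
        - name: NODE_NAME              # illustrative name, not necessarily what the PR uses
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
```

With no hostPath volume there is nothing for the platform to create on the host, so a missing /etc/hostname can no longer break systemd-hostnamed.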

Comment 18 Mike Fiedler 2019-09-05 15:19:55 UTC
We hit this today with fluentd as the collector, so that is a possible answer to comment 11 and comment 12. We still have never seen this on Azure or AWS, though.

Comment 19 Mike Fiedler 2019-09-05 20:33:45 UTC
Marking this testblocker again. I'm not sure what triggers systemd to restart systemd-hostnamed, but that is when the /etc/hostname dir issue gets noticed. As soon as fluentd or rsyslogd starts, the directory gets created, but the node will stay in Ready state until something causes the restart. I'm not sure of the full chain of events, but the node is in trouble from the time the collector starts; it just doesn't realize it yet.

Comment 20 Anping Li 2019-09-09 03:49:04 UTC
*** Bug 1748149 has been marked as a duplicate of this bug. ***

Comment 22 Mike Fiedler 2019-09-19 02:52:09 UTC
Verified on 4.2.0-0.nightly-2019-09-12-114308. A 250-node cluster is stable with fluentd running.

Comment 25 Mike Fiedler 2019-09-20 06:48:27 UTC
This bug affects both fluentd and rsyslogd and is already fixed in 4.2.

Comment 26 errata-xmlrpc 2019-10-16 06:38:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

Comment 27 Jeff Cantrill 2019-11-27 16:02:11 UTC
*** Bug 1777098 has been marked as a duplicate of this bug. ***

Comment 28 Jeff Cantrill 2019-12-09 18:34:19 UTC
*** Bug 1777098 has been marked as a duplicate of this bug. ***

Comment 29 Ben Parees 2020-04-23 16:08:03 UTC
*** Bug 1817769 has been marked as a duplicate of this bug. ***

