Bug 1610659 (CVE-2018-10923) - CVE-2018-10923 glusterfs: I/O to arbitrary devices on storage server
Summary: CVE-2018-10923 glusterfs: I/O to arbitrary devices on storage server
Keywords:
Status: CLOSED ERRATA
Alias: CVE-2018-10923
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Red Hat Product Security
QA Contact:
URL:
Whiteboard:
Depends On: 1610966 1616829 1625091 1625092 1625096 1625648
Blocks: 1609599
 
Reported: 2018-08-01 06:55 UTC by Sam Fowler
Modified: 2021-02-16 23:50 UTC (History)
CC List: 31 users

Fixed In Version: glusterfs-3.12.14, glusterfs-4.1.4
Doc Type: If docs needed, set a value
Doc Text:
It was found that the "mknod" call, derived from mknod(2), can create files on a glusterfs server node that point to devices ("device special files"). An authenticated attacker could use this to create an arbitrary device node and read data from any device attached to the glusterfs server node.
Clone Of:
Environment:
Last Closed: 2019-06-10 10:34:44 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 0 None None None 2018-09-04 06:25:01 UTC
Red Hat Product Errata RHSA-2018:2608 0 None None None 2018-09-04 06:26:26 UTC
Red Hat Product Errata RHSA-2018:3470 0 None None None 2018-11-05 14:57:23 UTC

Description Sam Fowler 2018-08-01 06:55:18 UTC
The Gluster file system implements a series of file system operations in the form of remote procedure calls (RPC). They are transported over the wire using TCP, optionally protected by SSL/TLS.

The "mknod" call, derived from mknod(2), can create files pointing to devices ("device special file"). Such device files can be opened on the server using normal I/O operations. As a consequence it's possible to read arbitrary devices.

Comment 1 Sam Fowler 2018-08-01 06:55:28 UTC
Acknowledgments:

Name: Michael Hanselmann (hansmi.ch)

Comment 5 Doran Moppert 2018-08-02 01:03:11 UTC
Statement:

This issue did not affect Red Hat Enterprise Linux 6 and 7 as the flaw is present in glusterfs-server, which is not shipped there.

This flaw affects glusterfs versions included in Red Hat Virtualization 4 Hypervisor. However, in recommended configurations, the vulnerability is only exposed to hypervisor administrators and cannot be exploited from virtual machines or other hosts on the network. For Red Hat Virtualization, Product Security has rated this flaw as Moderate. For additional information, refer to the Issue Severity Classification: https://access.redhat.com/security/updates/classification/.

Comment 9 Doran Moppert 2018-09-04 05:52:25 UTC
Created glusterfs tracking bugs for this issue:

Affects: fedora-all [bug 1625091]

Comment 11 errata-xmlrpc 2018-09-04 06:24:50 UTC
This issue has been addressed in the following products:

  Red Hat Gluster Storage 3.4 for RHEL 7
  Native Client for RHEL 7 for Red Hat Storage

Via RHSA-2018:2607 https://access.redhat.com/errata/RHSA-2018:2607

Comment 12 errata-xmlrpc 2018-09-04 06:26:14 UTC
This issue has been addressed in the following products:

  Red Hat Gluster Storage 3.4 for RHEL 6
  Native Client for RHEL 6 for Red Hat Storage

Via RHSA-2018:2608 https://access.redhat.com/errata/RHSA-2018:2608

Comment 13 Siddharth Sharma 2018-09-04 14:44:42 UTC
upstream fix:

https://review.gluster.org/21069

Comment 14 Siddharth Sharma 2018-09-04 14:44:56 UTC
Mitigation:

To limit the exposure of gluster server nodes:

1. Keep gluster servers on a LAN, not reachable from public networks.
2. Use the gluster auth.allow and auth.reject volume options to restrict which client addresses may connect.
3. Use TLS certificates to authenticate gluster clients.

Caveat: these measures do not protect against attacks by authenticated gluster clients.
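As a sketch of items 2 and 3 above (the volume name "myvol" and the address range are placeholders; adapt them to your deployment), the corresponding gluster CLI settings might look like:

```shell
# Item 2: restrict which client addresses may mount the volume.
# auth.allow takes a comma-separated list of addresses or patterns;
# addresses matched by auth.reject are refused.
gluster volume set myvol auth.allow '192.168.10.*'

# Item 3: require TLS on the data path so that only clients
# presenting a certificate signed by the trusted CA can connect.
gluster volume set myvol client.ssl on
gluster volume set myvol server.ssl on
```

Note that TLS additionally requires certificates and a CA file to be provisioned on servers and clients; see the gluster documentation for the full procedure.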

Comment 15 errata-xmlrpc 2018-11-05 14:57:11 UTC
This issue has been addressed in the following products:

  Red Hat Virtualization 4 for Red Hat Enterprise Linux 7

Via RHSA-2018:3470 https://access.redhat.com/errata/RHSA-2018:3470

