Bug 1505512
| Summary: | Enable RADOS based features in downstream Ganesha builds | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Ali Maredia <amaredia> |
| Component: | Build | Assignee: | tserlin |
| Status: | CLOSED ERRATA | QA Contact: | Ramakrishnan Periyasamy <rperiyas> |
| Severity: | high | Docs Contact: | |
| Priority: | low | ||
| Version: | 3.0 | CC: | ceph-qe-bugs, flucifre, hnallurv, kdreyer, mbenjamin, rraja |
| Target Milestone: | rc | ||
| Target Release: | 3.0 | ||
| Hardware: | Unspecified | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | RHEL: nfs-ganesha-2.5.2-11.el7cp Ubuntu: nfs-ganesha_2.5.2-11redhat1xenial | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2017-12-05 23:49:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
|
Description
Ali Maredia
2017-10-23 18:55:59 UTC
Testing Ganesha fetching exports from RADOS objects
***************************************************
Prerequisites
=============
* A Ceph cluster with a Ceph filesystem
* 'nfs-ganesha' and 'nfs-ganesha-ceph' packages installed on the client machine
* Network connectivity between Ceph cluster and NFS-Ganesha machine
Assuming the Ganesha host IP is 10.70.43.205 and the Ceph mon IP is 10.70.42.111.
Steps
=====
On the Ganesha host, do the following:
1. Set /etc/ceph/ceph.conf to be able to connect to Ceph cluster
$ sudo cat /etc/ceph/ceph.conf
[global]
mon_host = 10.70.42.111:6789
2. Create an export block and store it as a RADOS object in
CephFS's data pool (here named 'cephfs_data')
$ cat export.conf
EXPORT
{
EXPORT_ID = 100;
Path = /;
Pseudo = /;
Protocols = 4;
Transports = TCP;
Squash = No_root_squash;
FSAL {
Name = Ceph;
}
CLIENT {
Clients = 10.70.43.205;
Access_Type = rw;
}
}
$ sudo rados -p cephfs_data put ganesha export.conf
3. Set up ganesha.conf to allow Ganesha to read from the RADOS object
$ sudo cat /etc/ganesha/ganesha.conf
%url rados://cephfs_data/ganesha
4. Start the NFS-Ganesha server. The Ganesha server should now export
the Ceph file system to 10.70.43.205, the IP of the Ganesha host
$ sudo systemctl start nfs-ganesha
5. NFS mount the Ceph file system and try accessing it
$ sudo mount.nfs4 10.70.43.205:/ /mnt/nfs4/
$ cd /mnt/nfs4/
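For QE automation, the export block from step 2 can be rendered and sanity-checked before it is stored with `rados put`. A minimal sketch (the template, function name, and checks here are illustrative helpers, not part of this report; the values are the example values used above):

```python
# Illustrative helper: render the EXPORT block from step 2 and run a few
# sanity checks before it is stored as a RADOS object with `rados put`.
EXPORT_TEMPLATE = """EXPORT
{{
    EXPORT_ID = {export_id};
    Path = /;
    Pseudo = /;
    Protocols = 4;
    Transports = TCP;
    Squash = No_root_squash;
    FSAL {{
        Name = Ceph;
    }}
    CLIENT {{
        Clients = {client_ip};
        Access_Type = rw;
    }}
}}
"""

def render_export(export_id: int, client_ip: str) -> str:
    """Fill in the template and verify the block is well-formed."""
    block = EXPORT_TEMPLATE.format(export_id=export_id, client_ip=client_ip)
    # Ganesha rejects an unbalanced config block, so check braces up front.
    assert block.count("{") == block.count("}"), "unbalanced braces"
    # Each EXPORT block needs a unique EXPORT_ID.
    assert f"EXPORT_ID = {export_id};" in block
    return block

conf = render_export(100, "10.70.43.205")
print(conf)
```

The rendered text can then be written to export.conf and stored exactly as in step 2 (`sudo rados -p cephfs_data put ganesha export.conf`).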
this is missing a QE ack.

(In reply to Ram Raja from comment #9) Adding a step to the procedure above, after step 1:

Make sure that the Ceph 'admin' auth ID's keyring file is at the default location, e.g., /etc/ceph/ceph.client.admin.keyring. Ganesha's librados client uses the 'admin' ID to fetch the export RADOS object.

Testing Ganesha storing client recovery data in RADOS OMAP key-value
********************************************************************
Prerequisites
=============
* A Ceph cluster with a Ceph filesystem
* 'nfs-ganesha' and 'nfs-ganesha-ceph' packages installed on the client machine
* Network connectivity between Ceph cluster and NFS-Ganesha machines
Assuming the Ganesha host IP is 10.70.43.205 and the Ceph mon IP is 10.70.42.111.
Steps
=====
On the Ganesha host, do the following:
1. Set /etc/ceph/ceph.conf to be able to connect to Ceph cluster
$ sudo cat /etc/ceph/ceph.conf
[global]
mon_host = 10.70.42.111:6789
2. Ensure that the Ceph 'admin' auth ID's keyring file is in the default location,
e.g., /etc/ceph/ceph.client.admin.keyring
3. Create an export block and store it as a RADOS object in
CephFS's data pool (here 'cephfs_data')
$ cat export.conf
EXPORT
{
EXPORT_ID = 100;
Path = /;
Pseudo = /;
Protocols = 4;
Transports = TCP;
Squash = No_root_squash;
FSAL {
Name = Ceph;
}
CLIENT {
Clients = 10.70.43.205;
Access_Type = rw;
}
}
$ sudo rados -p cephfs_data put ganesha export.conf
4. Set up ganesha.conf to allow Ganesha to read from the RADOS object,
and store client IDs as RADOS OMAP key-values in CephFS's
data pool, 'cephfs_data'
$ cat /etc/ganesha/ganesha.conf
%url rados://cephfs_data/ganesha
NFSv4 {
RecoveryBackend = 'rados_kv';
}
RADOS_KV {
ceph_conf = '/etc/ceph/ceph.conf';
userid = 'admin';
pool = 'cephfs_data';
}
5. Start the NFS-Ganesha server. The Ganesha server should now export
the Ceph file system to 10.70.43.205, the IP of the Ganesha host
$ sudo systemctl start nfs-ganesha
6. NFS mount the Ceph file system and try accessing it
$ sudo mount.nfs4 10.70.43.205:/ /mnt/nfs4/
$ cd /mnt/nfs4/
7. Check the OMAP key-values of the objects with the suffixes '_old' and '_recov'
in CephFS's data pool. The NFS client's IP and hostname are stored among them;
in the following case, only in the key-value of the 'node0_recov' object.
$ sudo rados -p cephfs_data listomapvals node0_recov -
6480837291517411329
value (72 bytes) :
00000000 3a 3a 66 66 66 66 3a 31 30 2e 37 30 2e 34 33 2e |::ffff:10.70.43.|
00000010 32 30 35 2d 28 34 37 3a 4c 69 6e 75 78 20 4e 46 |205-(47:Linux NF|
00000020 53 76 34 2e 31 20 64 68 63 70 34 33 2d 32 30 35 |Sv4.1 dhcp43-205|
00000030 2e 6c 61 62 2e 65 6e 67 2e 62 6c 72 2e 72 65 64 |.lab.eng.blr.red|
00000040 68 61 74 2e 63 6f 6d 29 |hat.com)|
00000048
$ sudo rados -p cephfs_data listomapvals node0_old -
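The 72-byte value shown in step 7 is the client recovery record, which embeds the NFS client's IP and hostname. A small sketch that decodes the `listomapvals` hex dump back into text (the parsing helper is illustrative; the dump is the one from step 7):

```python
# Illustrative: decode the hex dump printed by `rados listomapvals` back into
# the client recovery string. The dump below is the one shown in step 7.
DUMP = """\
00000000 3a 3a 66 66 66 66 3a 31 30 2e 37 30 2e 34 33 2e |::ffff:10.70.43.|
00000010 32 30 35 2d 28 34 37 3a 4c 69 6e 75 78 20 4e 46 |205-(47:Linux NF|
00000020 53 76 34 2e 31 20 64 68 63 70 34 33 2d 32 30 35 |Sv4.1 dhcp43-205|
00000030 2e 6c 61 62 2e 65 6e 67 2e 62 6c 72 2e 72 65 64 |.lab.eng.blr.red|
00000040 68 61 74 2e 63 6f 6d 29 |hat.com)|
"""

def decode_dump(dump: str) -> str:
    """Strip the offset column and the |ascii| gutter, then decode the hex."""
    data = bytearray()
    for line in dump.splitlines():
        # First field is the offset; everything after '|' is the ASCII gutter.
        hex_part = line.split("|")[0].split(None, 1)[1]
        data += bytes.fromhex(hex_part)
    return data.decode("ascii")

value = decode_dump(DUMP)
print(value)
# → ::ffff:10.70.43.205-(47:Linux NFSv4.1 dhcp43-205.lab.eng.blr.redhat.com)
```

This confirms the record holds the client's IPv4-mapped address plus its NFSv4 client-owner name, as described in step 7.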
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387