Bug 1781184

Summary: [RFE]Support LUKS + Clevis/Tang configuration from gluster-ansible roles
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Gobinda Das <godas>
Component: rhhi
Assignee: Gobinda Das <godas>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: high
Version: rhhiv-1.8
CC: dwalveka, jcall, pasik, rhs-bugs, sabose
Target Milestone: ---
Keywords: FutureFeature
Target Release: RHHI-V 1.8
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
At-rest encryption using Network-Bound Disk Encryption is now supported on Red Hat Hyperconverged Infrastructure for Virtualization.
Story Points: ---
Clone Of:
: 1781187 1781189 (view as bug list)
Environment:
Last Closed: 2020-08-04 14:50:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1781187, 1826283
Bug Blocks: 1741792, 1750302, 1779976

Description Gobinda Das 2019-12-09 13:47:22 UTC
Description of problem:
 Currently, cockpit does not support LUKS + Clevis/Tang configuration during deployment. We need a way to configure LUKS + Clevis/Tang from cockpit.


Comment 2 Gobinda Das 2020-01-28 03:51:43 UTC
For RHHI-V 1.8 we are going to support this only through the Ansible role; that is, the user needs to run the Ansible playbook manually prior to deployment. Once a device is encrypted, its name changes to luks_<device name> (for example, luks_sdb), which means that in cockpit the user needs to change the device name in the brick tab to /dev/mapper/luks_<device name>.
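The naming convention above can be sketched as a simple shell mapping (illustrative only; sdb is the example device name from this comment):

```shell
# Sketch of the naming convention described above: once /dev/<name> is
# LUKS-encrypted and opened, the brick path entered in the cockpit brick
# tab must point at the device-mapper name instead of the raw device.
dev="sdb"                            # example raw device name
mapper_name="luks_${dev}"            # name used after encryption
brick_device="/dev/mapper/${mapper_name}"
echo "${brick_device}"               # -> /dev/mapper/luks_sdb
```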

Comment 3 Gobinda Das 2020-02-24 10:17:04 UTC
Integrate LUKS/Clevis with Tang using the Ansible roles:
> Create the inventory file: ansible-vault create luks_tang_inventory.yml
> Configure the LUKS devices and Tang server: ansible-playbook -i luks_tang_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/luks_tang_setup.yml --tags luksencrypt,bindtang --ask-vault-pass

The inventory file is encrypted because it contains sensitive data about the devices.
  Alternatively, you can create the inventory file first and then encrypt it: ansible-vault encrypt luks_tang_inventory.yml
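Before it is encrypted with ansible-vault, the inventory might look roughly like the sketch below. This is a hand-written illustration, not the role's authoritative schema: the host name, device name, passphrase placeholder, Tang URL, and the variable names (gluster_infra_luks_devices, gluster_infra_tangservers) are assumptions for illustration only.

```shell
# Write an illustrative inventory file. The variable names below are
# assumptions for illustration, not the role's guaranteed schema.
cat > luks_tang_inventory.yml <<'EOF'
hc_nodes:
  hosts:
    host1.example.com:
      # devices to LUKS-encrypt (hypothetical variable name)
      gluster_infra_luks_devices:
        - devicename: /dev/sdb
          passphrase: REPLACE_WITH_STRONG_PASSPHRASE
      # Tang servers to bind with Clevis (hypothetical variable name)
      gluster_infra_tangservers:
        - url: http://tang.example.com
EOF
# Encrypt the file in place, since it contains passphrases:
# ansible-vault encrypt luks_tang_inventory.yml
```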

This RFE consists of two operations which can be executed individually using Ansible tags (luksencrypt and bindtang):
 1- Encrypt devices:
    This step encrypts the provided devices and adds entries to /etc/crypttab so that the devices are unlocked automatically.
 2- Bind Tang servers:
    This step binds the Tang servers with Clevis so that the OS disk is unlocked automatically during reboot.
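Under the hood, the two tagged operations correspond roughly to the standard cryptsetup and clevis command-line tools. The sketch below is an assumption about what the role automates, not the role's exact code; the device name and Tang URL are placeholders. The commands are left commented out because they are destructive and require root and real hardware.

```shell
# Rough sketch of the two tagged operations using the standard CLI tools.
# /dev/sdb and the Tang URL are placeholders; the role automates these
# steps, and this is not the role's exact implementation.

# 1- luksencrypt: format the device with LUKS and open it as luks_<name>,
#    plus an /etc/crypttab entry so the device unlocks at boot:
# cryptsetup luksFormat /dev/sdb
# cryptsetup luksOpen /dev/sdb luks_sdb

# 2- bindtang: bind the LUKS device to the Tang server via Clevis:
# clevis luks bind -d /dev/sdb tang '{"url": "http://tang.example.com"}'
```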

Command to setup both: ansible-playbook -i inventory.yml luks_tang_setup.yml --tags luksencrypt,bindtang --ask-vault-pass

This RFE also has an impact on Day-2 operations (Expand Cluster and Expand Volume):
1- Expand Cluster:
   The user needs to run the Ansible playbook manually, with --tags luksencrypt only, prior to expanding the cluster, because the Tang servers do not have to be bound again.

2- Expand Volume:
   The user needs to run the Ansible playbook manually, with --tags luksencrypt, prior to expanding the volume, because the device must be encrypted first and the volume created afterwards.
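Based on the commands earlier in this comment, the Day-2 invocation would use only the luksencrypt tag, with the same inventory and playbook paths as above:

```shell
# Day-2 (Expand Cluster / Expand Volume): encrypt the new devices only,
# without re-binding the Tang servers.
ansible-playbook -i luks_tang_inventory.yml \
  /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/luks_tang_setup.yml \
  --tags luksencrypt --ask-vault-pass
```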

Comment 4 SATHEESARAN 2020-04-14 07:57:21 UTC
Tested with gluster-ansible-infra-1.0.4-7

The inventory file for NBDE is available - /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/luks_tang_inventory.yml

1. A Tang server is created and the service is up and running.
2. The required values are added and the playbook is executed with the command:
# ansible-playbook -i luks_tang_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/luks_tang_setup.yml --tags blacklistdevices,luksencrypt,bindtang --ask-vault-pass

This resulted in encrypting the additional disks and also binding the root disk with the Tang server.
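One way to spot-check the result described above is with the standard tools (a sketch; luks_sdb and /dev/sdb are placeholders, and clevis luks list is only available on newer clevis versions). The commands are commented out because they require a deployed host:

```shell
# Verify that the extra disk is encrypted and bound (placeholders):
# lsblk -o NAME,TYPE,MOUNTPOINT      # encrypted disks show a 'crypt' child
# cryptsetup status luks_sdb         # should report the mapping as active
# clevis luks list -d /dev/sdb       # shows the tang binding (newer clevis)
```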

Comment 9 errata-xmlrpc 2020-08-04 14:50:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314