Bug 1854177 - RFE: Auto unlock support for boot partition in Grub2 [NEEDINFO]
Summary: RFE: Auto unlock support for boot partition in Grub2
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: grub2
Version: 9.1
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: beta
Target Release: 9.1
Assignee: Bootloader engineering team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks: IoT
 
Reported: 2020-07-06 16:19 UTC by shkhisti
Modified: 2022-11-01 07:28 UTC
CC List: 25 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-01 07:28:50 UTC
Type: Feature Request
Target Upstream Version:
ribarry: needinfo? (mthacker)
eterrell: needinfo? (dgilbert)



Description shkhisti 2020-07-06 16:19:41 UTC
With more and more customers deploying Linux workloads to cloud environments, one of the top requests from customers is the ability to encrypt both the root and boot partitions of their VMs deployed in the cloud. 
With these workloads being deployed in the cloud, one of the top requirements is management of virtual machines at cloud scale. There should not be any manual intervention, such as entering a passphrase; customers' fully encrypted VMs should be able to unlock the partitions automatically. 
 
This bug proposes possible solutions that could be implemented in GRUB to address these requirements. 
 
Proposed options:
 
Option 1: Using an external key protector. 
There are a couple of sub-options that can be explored:
 
•	Passing the encryption key as an EFI variable: 
In this option, the cloud host environment creates a boot-time UEFI variable and passes the disk encryption key in it. 
GRUB can be configured to read a predefined EFI variable, so when GRUB comes up it reads the variable and uses its value to unlock the boot partition. 
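As a rough illustration of the consumer side, here is a Python sketch of parsing an efivarfs-style variable blob (a 4-byte little-endian attributes word followed by the payload, matching the Linux efivarfs layout). The variable name/GUID is made up, and in GRUB itself this would of course be C code using its EFI variable helpers; this just shows the data layout the host and guest would have to agree on.

```python
import os
import struct
import tempfile

# Hypothetical variable name/GUID -- the real names would be defined by
# the cloud host environment, not by GRUB.
VAR_NAME = "DiskUnlockKey-12345678-1234-1234-1234-123456789abc"

def read_efivar(path):
    """Parse an efivarfs-style blob: a 4-byte little-endian attributes
    word, then the variable payload."""
    with open(path, "rb") as f:
        raw = f.read()
    attrs = struct.unpack("<I", raw[:4])[0]
    return attrs, raw[4:]

# Simulate the variable the host would create at boot time.
key = b"\x00\x11\x22\x33" * 8  # 32-byte volume key (placeholder)
with tempfile.NamedTemporaryFile(delete=False) as f:
    # 0x2 = EFI_VARIABLE_BOOTSERVICE_ACCESS: volatile, boot-services only,
    # so the key does not persist into the running OS.
    f.write(struct.pack("<I", 0x2) + key)
    path = f.name

attrs, payload = read_efivar(path)
os.unlink(path)
```

Marking the variable boot-services-only matters here: the key should disappear once ExitBootServices() has been called.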
 
•	GRUB can look for an attached PMEM device, read the encryption key from it, and use it to unlock the boot and root partitions. 
For this option, the cloud host environment attaches a PMEM device to the guest VM, formats it with a FAT32 file system, and places the encryption key as a file on that device.
GRUB then has to enumerate all devices attached to the VM, identify the PMEM/external device that contains the unlock key, and use it to open the boot partition. 
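The device walk can be sketched with a toy model. The key filename is a made-up convention, and real GRUB code would iterate GRUB devices/filesystems rather than mount points; this only illustrates the "scan everything, take the first device carrying the key file" logic.

```python
import pathlib
import tempfile

KEY_FILENAME = "unlock.key"  # hypothetical naming convention

def find_key(mount_points):
    """Walk the attached devices (modeled here as mount points) and
    return the key bytes from the first one carrying the key file."""
    for mp in mount_points:
        candidate = pathlib.Path(mp) / KEY_FILENAME
        if candidate.is_file():
            return candidate.read_bytes()
    return None

# Simulate two attached devices; only the second is the PMEM key disk.
plain = tempfile.mkdtemp()
pmem = tempfile.mkdtemp()
(pathlib.Path(pmem) / KEY_FILENAME).write_bytes(b"secret-luks-key")

result = find_key([plain, pmem])
```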
 
Option 2: Using a TPM-based protector in GRUB to unlock the partitions. 
In this option, the cloud host environment injects the encryption key into the guest VM's virtual TPM at boot time. GRUB can then use the standard TPM command interface to unseal the key. 
With a TPM-based key protector, various options can be explored: the hierarchy under which the key needs to be loaded, the auth policy required to unseal the secret, etc. 
Using a TPM-based protector with PCR-based policies provides a guarantee that a compromised boot loader will not be able to unlock the partition.
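To illustrate why PCR-bound sealing defeats a tampered boot loader, here is a toy model in plain Python (no real TPM calls): the extend rule follows the TPM's new = H(old || H(data)) pattern, and the simulated "TPM" releases the secret only when the PCR equals the value it was sealed against.

```python
import hashlib

def extend(pcr, data):
    """Model of a TPM PCR extend: new = SHA-256(old || SHA-256(data))."""
    return hashlib.sha256(pcr + hashlib.sha256(data).digest()).digest()

def unseal(secret, pcr, sealed_against):
    """Toy policy-gated unseal: release the secret only when the PCR
    matches the value it was sealed against."""
    if pcr != sealed_against:
        raise PermissionError("PCR policy check failed")
    return secret

pcr0 = b"\x00" * 32  # PCRs start zeroed at reset
good = extend(extend(pcr0, b"trusted shim"), b"trusted grub")
bad = extend(extend(pcr0, b"trusted shim"), b"tampered grub")

key = unseal(b"luks-master-key", good, sealed_against=good)
try:
    unseal(b"luks-master-key", bad, sealed_against=good)
except PermissionError:
    refused = True
```

Because any change to a measured component changes every subsequent PCR value, the tampered chain can never reproduce the digest the key was sealed against.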

Comment 2 Javier Martinez Canillas 2020-08-17 12:31:44 UTC
(In reply to shkhisti from comment #0)
> With more and more customers deploying Linux workloads to Cloud
> environments, one of the top asks from the customers is they need to encrypt
> both their root and boot partition for their VM’s deployed in cloud. 
> With these workloads being deployed in cloud, one of the top requirements is
> management of virtual machines at cloud scale. There should not be any
> manual intervention like entering passphrase. Customers fully encrypted VM’s
> should be able unlock the partitions automatically. 
>  
> This bug is to propose possible solutions which can be implemented in the
> GRUB to facilitate these asks. 
>  
> Proposed options:
>

I think we want a solution that won't be vendor specific. So probably we should
have this discussion in the GRUB development mailing list [0] to explore the
possible approaches.

> Option 1: Using External key protector. 
> There are couple of sub options that can be explored
>
> •	Passing encryption key as and EFI variable: 
> In this option, cloud host environment will create a boot time UEFI
> variable. And it will pass the disk encryption key in this variable. 
> Grub can be configured to read from pre defined EFI variable. So when grub
> comes up, it will read the EFI variable and use the value to unlock the boot
> partition. 
>

This option would be the easiest to implement, since GRUB already has support for reading EFI variables. But it does seem to me that it would be more of a workaround.

> •	Grub can look for attached PMEM device to read the encryption key and use
> it to unlock boot and root partitions. 
> For this option, cloud host environment will attach a PMEM device to  guest
> vm, format it to fat32 file system and put encryption key as a file inside
> this PMEM device.
> Grub will have to evaluate all the devices attached to the VM, identify PMEM
> / external device which contains the unlock key and use it to open the boot
> partition. 
>

I don't know what the support for PMEM devices in GRUB looks like. From a quick look at the sources it seems that there's no support, but maybe I'm wrong about this.

> Option 2: Using TPM based protector in grub to unlock the partitions. 
> In this option, cloud host environment will inject encryption key inside
> guest VM’s virtual TPM at the boot time. Grub can then use standard TPM
> commands to either unseal the key 
> using  standard TPM command interface.  With TPM based key protector various
> options can be explored on hierarchy under which key needs to be loaded,
> auth policy to unseal the secret etc. 
> Using TPM based protector and PCR based policies will provide guarantee that
> compromised boot loader will not be able to unlock the partition.

I think this would be the preferred option. The challenge is that there isn't
any support in GRUB for TPM commands. It only has support for doing measurements,
extending PCRs, and logging events using the EFI_TCG2_PROTOCOL.HashLogExtendEvent
function defined in the TCG EFI Protocol Specification [1].
 
To support unsealing a loaded TPM sealed object, the needed TPM commands have
to be implemented (e.g. TPM2_CreatePrimary, TPM2_Load, TPM2_Unseal, etc.).

This could be done using the EFI_TCG2_PROTOCOL.SubmitCommand function. The
tpm2-tcti-uefi project [2] already does this and could be integrated into
GRUB, but that would still require quite a lot of work.
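To give an idea of the marshalling involved, here is a sketch (in Python for illustration; GRUB would do this in C) of the fixed header of a TPM2_Unseal command as it would be handed to SubmitCommand. The tag and command-code constants are from the TPM 2.0 Library Specification; the authorization session area that a real unseal requires is left out, so this is only the skeleton of the big-endian byte layout.

```python
import struct

TPM_ST_SESSIONS = 0x8002   # TPM2_Unseal requires an authorization session
TPM_CC_UNSEAL = 0x0000015E  # command code per TPM 2.0 spec, Part 2

def unseal_command_skeleton(item_handle):
    """Marshal the fixed part of a TPM2_Unseal command: tag (2 bytes),
    commandSize (4), commandCode (4), then the itemHandle. The
    authorization area that follows the handle is elided here."""
    body = struct.pack(">I", item_handle)
    size = 2 + 4 + 4 + len(body)  # size covers the whole command blob
    return struct.pack(">HII", TPM_ST_SESSIONS, size, TPM_CC_UNSEAL) + body

cmd = unseal_command_skeleton(0x80000001)  # transient handle of loaded object
tag, size, cc = struct.unpack(">HII", cmd[:10])
```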

Regardless of the option used to store the key and pass it to GRUB, I think
a lot of thought should go into how to make the solution portable across
different vendors, as mentioned before, and into how the key should be stored.

For example, the clevis project [3], which allows automatically unlocking an
encrypted root partition, uses the JSON Object Signing and Encryption (JOSE)
[4] standard, where the metadata is stored either in a LUKSMeta [5] header
(for LUKSv1) or in the LUKS header itself (for LUKSv2).

The LUKS volume contains a JSON Web Encryption (JWE) [6] that has information
on how to retrieve a JSON Web Key (JWK) [7] and the actual LUKS key encrypted
with the referenced JWK.
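For illustration, here is a toy of what such metadata looks like in JWE compact serialization. The fields under "clevis" are only loosely modeled on the tpm2 pin (the real schema may differ), and the IV/ciphertext/tag payloads are placeholders.

```python
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

# Hypothetical JWE protected header, loosely modeled on the clevis
# "tpm2" pin -- illustrative field names, not the exact schema.
protected = {
    "alg": "dir",
    "enc": "A256GCM",
    "clevis": {"pin": "tpm2", "tpm2": {"hash": "sha256", "pcr_ids": "7"}},
}

# JWE compact serialization: header.key.iv.ciphertext.tag
jwe_compact = ".".join([
    b64url(json.dumps(protected).encode()),
    "",                      # empty encrypted-key part for "dir"
    b64url(b"iv-bytes"),     # placeholder IV
    b64url(b"ciphertext"),   # the wrapped LUKS key would live here
    b64url(b"auth-tag"),     # placeholder authentication tag
])

header = json.loads(b64url_decode(jwe_compact.split(".")[0]))
```

The point is that everything an unlocker needs, short of the protector itself, travels inside the LUKS header as self-describing JSON.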

Currently clevis doesn't support an encrypted boot partition because the code
to unlock the LUKS volumes is executed in the initramfs. But if this could be
moved to GRUB, then it could also support the full disk encryption case.

[0]: https://lists.gnu.org/mailman/listinfo/grub-devel
[1]: https://trustedcomputinggroup.org/wp-content/uploads/EFI-Protocol-Specification-rev13-160330final.pdf
[2]: https://github.com/tpm2-software/tpm2-tcti-uefi
[3]: https://github.com/latchset/clevis
[4]: https://tools.ietf.org/html/rfc7520
[5]: https://github.com/latchset/luksmeta
[6]: https://tools.ietf.org/html/rfc7516
[7]: https://tools.ietf.org/html/rfc7517

Comment 3 Mario Limonciello 2020-08-20 19:10:24 UTC
> Currently clevis doesn't support an encrypted boot partition because the code
> to unlock the LUKS volumes is executed in the initramfs. But if this could be
> moved to GRUB, then it could also support the full disk encryption case.

As another dimension to add to this discussion, the other approach that comes to mind for me is to
0) remove GRUB from the equation in this FDE scenario?
1) build the initramfs on a build server, sign it, and distribute it with the kernel.  

If you take this approach, I would argue the value of an encrypted boot partition is actually obviated.  At that point you're running (effectively static) signed binaries from the boot partition.

Comment 4 shkhisti 2020-08-24 19:05:49 UTC
>As another dimension to add to this discussion, the other approach that comes to mind for me is to
>0) remove GRUB from the equation in this FDE scenario?
>1) build the initramfs on a build server, sign it, and distribute it with the kernel.  

While this approach sounds interesting, I do see a few questions:
1. We have an unencrypted GRUB configuration file in the boot partition which needs to be measured. 
2. What will be the process to update the initramfs? As the initramfs is updated locally, how can it be signed after being updated from within the VM?
3. Is the initramfs also measured? Will it be signed by the Red Hat signing certificate?
4. Which component will do the signature check on the initramfs?
5. How can we guarantee that the encryption key is tied to boot integrity?

Comment 5 Mario Limonciello 2020-08-24 19:23:43 UTC
>1. We have unencrypted grub configuration file in the boot partition which needs to be measured. 

If GRUB stays in the picture, the GRUB binary already measures this into PCR8.

>2. What will be the process to update initramfs. As initramfs is updated locally how can it be signed after updating it from the VM ?

For this to work and be safe, the initramfs needs to be signed on a build server.  So it turns into a question of what objects need to be placed into the initramfs.  Can a "one-size fits all" approach work?  If not - why?

>3. Is initramfs also measured ?  Will it be signed by redhat signing certificate 

My thought is that it should be signed by RH signing certificate yes.

>4. Which component will do signature check on initramds ?

I think this would be a work item for either the kernel or bootloader to verify the signature.

>5. How can we guarantee that encryption key is tied to boot integrity?

The first step in the chain should be UEFI Secure Boot.  
* For shim, GRUB, the kernel, and the initramfs, only the signed versions would be loadable.

Next, I would think it is best to be bound to a TPM PCR (or a combination of PCRs).  If using Clevis (as described in comment #2), an encrypted version of the key can be stored in the LUKS metadata and only released when those PCRs are valid.
* Initramfs would use bound PCR values to decrypt the key and mount and pivot to the rootfs.

Comment 6 Javier Martinez Canillas 2020-08-27 10:05:46 UTC
Thanks a lot Mario for your feedback and comments.

While I agree with you that ideally a static initramfs image should be shipped, so
it can be signed and its TPM measurement known in advance for a PCR authorized policy,
as shkhisti mentioned we are currently re-building the initramfs locally because:

a) any package can install a dracut module
b) the generated initramfs is tailored to the local machine and only contains the
   needed kernel modules (besides the rescue image that's generated with no-hostonly).

This imposes some challenges and that's why I think that GRUB would need to support
unlocking an encrypted LUKS volume for the case where an unsigned initramfs image is
located in the encrypted partition.

All OSTree-based variants already generate the initramfs image on the server and ship
it as part of an OSTree deployment commit. So we are exploring aligning the other
variants to do the same, but that is unlikely to happen in the short term.

Comment 7 Mario Limonciello 2020-08-27 13:53:11 UTC
I acknowledge that dynamic initramfs generation is currently the status quo, but I feel this is the right time to question why.  Is it just historical flexibility?  Dracut was originally developed at a time when signing binaries and measuring the boot process were not a concern.

I have a feeling that a good look at the objects from your (a) and (b) points would show a fixed set of packages and modules that are normally inserted into the initramfs.  Admittedly I haven't tried, but I would bet it's a minimal performance impact (< 1 s) to just insert all the "possibly" used modules into the initramfs.  For the much simpler measured boot process you get in return, I have to think that trade-off is worth weighing.
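For what it's worth, dracut already has the knobs for building a generic rather than host-tailored image; a sketch of a configuration drop-in (option names per dracut.conf(5), module names are just examples):

```ini
# /etc/dracut.conf.d/generic-image.conf
# Build a portable image instead of one tailored to the local machine.
hostonly="no"

# Pull in modules unconditionally instead of probing the host for them:
add_dracutmodules+=" crypt lvm "
```

Whether the resulting image is reproducible enough to sign and pre-measure is a separate question, but the generation side is mostly configuration.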

> All OSTree-base variants already generate the initramfs image in the server and ship
> it as a part of an OSTree deployment commit. So we are exploring to align the other
> variants to do the same but is something that's unlikely to happen in the short term.

Yeah I agree trying to do this as part of this RFE is likely to be challenging, so perhaps you may consider it for a longer term goal.

Comment 9 Mark Thacker 2020-08-31 16:58:40 UTC
From what I can tell, this isn't a simple RFE and would definitely affect multiple Linux distributions, because grub2 itself would be modified.
We have other customers who have wanted something similar.
Today, as Javier notes, we do offer a Clevis client solution that allows LUKS encrypted volumes to be automatically unlocked and can do so in a way that depends on the presence of a Trusted Platform Module (TPM). 

However, Clevis is all about managing LUKS keys and passing them into dmcrypt during the mount process.
This doesn't immediately help with the actual boot volume, which, as noted, is currently unencrypted.

This will require some upstream coordination before being brought into RHEL itself.

Comment 12 Javier Martinez Canillas 2020-12-09 15:57:13 UTC
I will summarize the status of this RFE.

There are two work streams that are needed for this feature:

1) Support in GRUB to seal a LUKS key using a TPM2

2) LUKS support in GRUB, including LUKSv2

For (1), there hasn't been any progress in upstream GRUB and it is unlikely that
we will be able to do it in the short term.

For (2), we already have LUKSv1 support in the RHEL 8 GRUB version (2.02), but
some needed modules were not included in the signed EFI binary. So LUKSv1 was
not working when Secure Boot was enabled. It was fixed for 8.4 in bug #1873725.

Support for LUKSv2 is already in GRUB upstream and will be included in the
next GRUB 2.06 release. We might try to backport LUKSv2 support to RHEL 8 but
that's something that won't happen in 8.4.

Comment 13 Mark Heslin 2021-02-03 20:12:15 UTC
Peter, Javier, Eden, Shirang - as per last week's eng call:

 1. I've added Eden, Shirang to this BZ
 2. Eden will drive the upstream GRUB module work with guidance from RH
 3. Peter, Javier can you outline for Eden, Shirang the scope of work that is required?

Comment 18 RHEL Program Management 2022-01-06 07:27:02 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 27 RHEL Program Management 2022-11-01 07:28:50 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

