
Encrypted Pools #102

Merged · 10 commits · Jun 11, 2020
21 changes: 21 additions & 0 deletions README.md
@@ -39,6 +39,27 @@ Accepted values are: `linear`, `striped`, `raid0`, `raid1`, `raid4`, `raid5`, `r
This is a list of volumes that belong to the current pool. It follows the
same pattern as the `storage_volumes` variable, explained below.

##### `encryption`
This specifies whether the pool will be encrypted using LUKS.
__WARNING__: Toggling encryption for a pool is a destructive operation, meaning
the pool itself will be removed as part of the process of adding or removing
the encryption layer.

##### `encryption_passphrase`
This string specifies a passphrase used to unlock/open the LUKS volume(s).

##### `encryption_key_file`
This string specifies the full path to the key file used to unlock the LUKS volume(s).
Member

path on the control node or on the managed host?

Collaborator Author

Everything is on the managed host AFAIK. Getting the key file onto the managed nodes is up to the user. Maybe I should note that.

Member

@sergio-correia the LUKS parameters are needlessly different from the ones used in your role. https://github.com/linux-system-roles/nbde_client/pull/3/files#diff-7eeda618087b49ae876084ab6c73fdbbR9 uses pass and keyfile. Here one uses encryption_passphrase and encryption_key_file. Those names should be consistent. (I don't care which one is chosen.)

I will adopt encryption_passphrase and encryption_key_file. These seem more explicit.

Member (@pcahyna, Jun 11, 2020)

I think you can use just passphrase and key_file. In your role it is obvious that it is encryption related (it is an encryption role). I was concerned about the gratuitous pass vs passphrase difference or keyfile vs. key_file.

Member

@dwlehman would it make sense to have key_file on the control node? nbde_client uses a keyfile on the control node: linux-system-roles/nbde_client#3 (comment) . Are the use cases different enough to justify this difference in semantics?

Collaborator Author

I agree that it is more convenient for users and the consistency between roles is good.

Where on the managed node should we place the key files?

Member

> Where on the managed node should we place the key files?

@sergio-correia what do you do in nbde_client?

Member

I define an internal variable __nbde_client_managed_dir and have a task to copy the key files there and do the cleanup later. I pass this variable also to the python module, so it knows where to find the key files. It's currently defined as /var/run/linux-system-roles.nbde_client

Collaborator Author

For our purposes I think we need to keep the key file around for reboots, whereas (as far as I can tell) the nbde_client role removes it after setting up NBDE. Specifying that key files live on the control node would require developing a scheme for storing them on the managed nodes.

Member

Yeah, for us it's simpler that we need it to do some operations, then we don't need it anymore and can remove it.


##### `encryption_cipher`
This string specifies a non-default cipher to be used by LUKS.

##### `encryption_key_size`
This integer specifies the LUKS key size (in bits).

##### `encryption_luks_version`
This integer specifies the LUKS version to use.
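
Put together, a pool using these options is just a dict by the time it reaches the blivet module. A minimal sketch with hypothetical values (the pool name and disks are made up; `null`/`None` means "use the default"):

```python
# Hypothetical example of the pool specification the blivet module receives,
# combining the encryption options documented above. All values are made up;
# None means "use the default".
example_pool = {
    "name": "vg_example",              # hypothetical pool name
    "type": "lvm",
    "disks": ["sdb", "sdc"],           # hypothetical member disks
    "encryption": True,
    "encryption_passphrase": "not-a-real-passphrase",
    "encryption_key_file": None,       # alternative to a passphrase
    "encryption_cipher": None,         # e.g. "aes-xts-plain64"
    "encryption_key_size": None,
    "encryption_luks_version": None,   # e.g. 2 for LUKS2
    "volumes": [],
}

# A passphrase or key file must be present when encryption is requested,
# mirroring the module's has_key check.
assert example_pool["encryption_passphrase"] or example_pool["encryption_key_file"]
```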


#### `storage_volumes`
The `storage_volumes` variable is a list of volumes to manage. Each volume has the following
9 changes: 9 additions & 0 deletions defaults/main.yml
@@ -9,6 +9,15 @@ storage_pool_defaults:
state: "present"
type: lvm

encryption: false
encryption_passphrase: null
encryption_key_file: null
encryption_cipher: null
encryption_key_size: null
encryption_luks_version: null

raid_level: null

storage_volume_defaults:
state: "present"
type: lvm
152 changes: 98 additions & 54 deletions library/blivet.py
@@ -148,15 +148,72 @@ class BlivetAnsibleError(Exception):
pass


class BlivetVolume(object):
class BlivetBase(object):
Contributor

Do you need unit tests for these changes? If so, they can be added later, after the 15th.

blivet_device_class = None
_type = None

def __init__(self, blivet_obj, volume, bpool=None):
def __init__(self, blivet_obj, spec_dict):
self._blivet = blivet_obj
self._volume = volume
self._blivet_pool = bpool
self._spec_dict = spec_dict
self._device = None

def _manage_one_encryption(self, device):
ret = device
# Make sure to handle adjusting both existing stacks and future stacks.
if device == device.raw_device and self._spec_dict['encryption']:
# add luks
luks_name = "luks-%s" % device._name
if not device.format.exists:
fmt = device.format
else:
fmt = get_format(None)

self._blivet.format_device(device,
get_format("luks",
name=luks_name,
cipher=self._spec_dict.get('encryption_cipher'),
key_size=self._spec_dict.get('encryption_key_size'),
luks_version=self._spec_dict.get('encryption_luks_version'),
passphrase=self._spec_dict.get('encryption_passphrase') or None,
key_file=self._spec_dict.get('encryption_key_file') or None))

if not device.format.has_key:
raise BlivetAnsibleError("encrypted %s '%s' missing key/passphrase" % (self._type, self._spec_dict['name']))

luks_device = devices.LUKSDevice(luks_name,
fmt=fmt,
parents=[device])
self._blivet.create_device(luks_device)
ret = luks_device
elif device != device.raw_device and not self._spec_dict['encryption']:
# remove luks
if not device.format.exists:
fmt = device.format
else:
fmt = get_format(None)

ret = device.raw_device
self._blivet.destroy_device(device)
if fmt.type is not None:
self._blivet.format_device(ret, fmt)

# XXX: blivet has to store cipher, key_size, luks_version for existing before we
# can support re-encrypting based on changes to those parameters

return ret
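
The two branches above reduce to a small decision rule: a device is currently encrypted exactly when it differs from its `raw_device`. A standalone sketch of that rule, with illustrative names that are not part of the module:

```python
def plan_luks_action(is_encrypted_now, want_encryption):
    # Illustrative helper (not part of the module): a device counts as
    # "encrypted now" when it differs from its raw_device, i.e. a LUKS
    # layer already sits on top of it.
    if not is_encrypted_now and want_encryption:
        return "add-luks"     # format as LUKS and stack a LUKSDevice on top
    if is_encrypted_now and not want_encryption:
        return "remove-luks"  # destroy the LUKS layer, reformat the raw device
    return "no-op"            # already in the desired state
```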


class BlivetVolume(BlivetBase):
_type = "volume"

def __init__(self, blivet_obj, volume, bpool=None):
super(BlivetVolume, self).__init__(blivet_obj, volume)
self._blivet_pool = bpool

@property
def _volume(self):
return self._spec_dict

@property
def required_packages(self):
packages = list()
@@ -243,46 +300,7 @@ def _destroy(self):
self._blivet.devicetree.recursive_remove(self._device.raw_device)

def _manage_encryption(self):
# Make sure to handle adjusting both existing stacks and future stacks.
if self._device == self._device.raw_device and self._volume['encryption']:
# add luks
luks_name = "luks-%s" % self._device._name
if not self._device.format.exists:
fmt = self._device.format
else:
fmt = get_format(None)

self._blivet.format_device(self._device,
get_format("luks",
name=luks_name,
cipher=self._volume.get('encryption_cipher'),
key_size=self._volume.get('encryption_key_size'),
luks_version=self._volume.get('encryption_luks_version'),
passphrase=self._volume.get('encryption_passphrase') or None,
key_file=self._volume.get('encryption_key_file') or None))

if not self._device.format.has_key:
raise BlivetAnsibleError("encrypted volume '%s' missing key/passphrase" % self._volume['name'])

luks_device = devices.LUKSDevice(luks_name,
fmt=fmt,
parents=[self._device])
self._blivet.create_device(luks_device)
self._device = luks_device
elif self._device != self._device.raw_device and not self._volume['encryption']:
# remove luks
if not self._device.format.exists:
fmt = self._device.format
else:
fmt = get_format(None)

self._device = self._device.raw_device
self._blivet.destroy_device(self._device.children[0])
if fmt.type is not None:
self._blivet.format_device(self._device, fmt)

# XXX: blivet has to store cipher, key_size, luks_version for existing before we
# can support re-encrypting based on changes to those parameters
self._device = self._manage_one_encryption(self._device)

def _resize(self):
""" Schedule actions as needed to ensure the device has the desired size. """
@@ -462,16 +480,18 @@ def _get_blivet_volume(blivet_obj, volume, bpool=None):
return _BLIVET_VOLUME_TYPES[volume_type](blivet_obj, volume, bpool=bpool)


class BlivetPool(object):
blivet_device_class = None
class BlivetPool(BlivetBase):
_type = "pool"

def __init__(self, blivet_obj, pool):
self._blivet = blivet_obj
self._pool = pool
self._device = None
super(BlivetPool, self).__init__(blivet_obj, pool)
self._disks = list()
self._blivet_volumes = list()

@property
def _pool(self):
return self._spec_dict

@property
def required_packages(self):
packages = list()
@@ -489,6 +509,9 @@ def ultimately_present(self):
def _is_raid(self):
return self._pool.get('raid_level') not in [None, "null", ""]

def _member_management_is_destructive(self):
return False

def _create(self):
""" Schedule actions as needed to ensure the pool exists. """
pass
@@ -515,6 +538,8 @@ def _destroy(self):

leaves = [a for a in ancestors if a.isleaf]

self._device = None

def _type_check(self): # pylint: disable=no-self-use
return True

@@ -624,10 +649,12 @@ def manage(self):
self._look_up_device()

# schedule destroy if appropriate, including member type change
if not self.ultimately_present: # TODO: member type changes
self._manage_volumes()
if not self.ultimately_present or self._member_management_is_destructive():
if not self.ultimately_present:
self._manage_volumes()
self._destroy()
return
if not self.ultimately_present:
return

# schedule create if appropriate
self._create()
@@ -660,19 +687,36 @@ class BlivetLVMPool(BlivetPool):
def _type_check(self):
return self._device.type == "lvmvg"

def _member_management_is_destructive(self):
if self._device is None:
return False

if self._pool['encryption'] and not all(m.encrypted for m in self._device.parents):
return True
elif not self._pool['encryption'] and any(m.encrypted for m in self._device.parents):
return True

return False
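
The check above can be read as: rebuilding the pool is required whenever the desired `encryption` flag disagrees with the actual encryption state of any existing member. A self-contained sketch of the same rule (hypothetical helper, not part of the module):

```python
def member_change_is_destructive(want_encryption, members_encrypted):
    # Illustrative helper: members_encrypted is a list of booleans, one per
    # existing pool member. Any mismatch with the desired state means the
    # member stack must be torn down and rebuilt.
    if want_encryption and not all(members_encrypted):
        return True
    if not want_encryption and any(members_encrypted):
        return True
    return False
```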

def _get_format(self):
fmt = get_format("lvmpv")
if not fmt.supported or not fmt.formattable:
raise BlivetAnsibleError("required tools for managing LVM are missing")

return fmt

def _manage_encryption(self, members):
managed_members = list()
for member in members:
managed_members.append(self._manage_one_encryption(member))

return managed_members

def _create(self):
if self._device:
return

members = self._create_members()

members = self._manage_encryption(self._create_members())
try:
pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members)
except Exception as e:
23 changes: 23 additions & 0 deletions library/blockdev_info.py
@@ -31,12 +31,32 @@
type: dict
'''

import os
import shlex

from ansible.module_utils.basic import AnsibleModule


LSBLK_DEVICE_TYPES = {"part": "partition"}
DEV_MD_DIR = '/dev/md'


def fixup_md_path(path):
if not path.startswith(DEV_MD_DIR):
return path

if not os.path.exists(DEV_MD_DIR):
return path

ret = path
for md in os.listdir(DEV_MD_DIR):
md_path = "%s/%s" % (DEV_MD_DIR, md)
if os.path.realpath(md_path) == os.path.realpath(path):
ret = md_path
break

return ret
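
The function walks `/dev/md` looking for a symlink that resolves to the same node as `path`, so MD devices get reported under their stable named path. A parameterized sketch of the same idea — the directory is an argument here, an assumption made so it can be exercised outside `/dev`:

```python
import os
import tempfile

def canonical_named_path(path, link_dir):
    # Generalization of fixup_md_path with the symlink directory as a
    # parameter (the module hardcodes /dev/md): return the path of a symlink
    # under link_dir that resolves to the same node as `path`, or `path`
    # unchanged if there is none.
    if not os.path.exists(link_dir):
        return path
    for name in os.listdir(link_dir):
        candidate = os.path.join(link_dir, name)
        if os.path.realpath(candidate) == os.path.realpath(path):
            return candidate
    return path
```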


def get_block_info(run_cmd):
buf = run_cmd(["lsblk", "-o", "NAME,FSTYPE,LABEL,UUID,TYPE", "-p", "-P", "-a"])[1]
@@ -50,6 +70,9 @@ def get_block_info(run_cmd):
print(pair)
raise
if key:
if key.lower() == "name":
value = fixup_md_path(value)

dev[key.lower()] = LSBLK_DEVICE_TYPES.get(value, value)
if dev:
info[dev['name']] = dev
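
For reference, the `lsblk -P` ("pairs") output consumed above is a line of KEY="value" pairs, and `shlex` takes care of the quoting. A minimal standalone parser sketch (the helper name is hypothetical):

```python
import shlex

def parse_lsblk_line(line):
    # Illustrative helper: parse one line of `lsblk -P` output, e.g.
    #   NAME="/dev/sda1" FSTYPE="ext4" TYPE="part"
    # into a dict with lowercase keys. shlex.split() strips the quoting,
    # including for values that contain spaces (LABEL="my disk").
    dev = {}
    for pair in shlex.split(line):
        key, _, value = pair.partition("=")
        if key:
            dev[key.lower()] = value
    return dev
```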
57 changes: 57 additions & 0 deletions tests/test-verify-pool-members.yml
@@ -0,0 +1,57 @@
- set_fact:
_storage_test_pool_pvs_lvm: "{{ ansible_lvm.pvs|dict2items(key_name='path', value_name='info')|json_query('[?info.vg==`\"{}\"`].path'.format(storage_test_pool.name))|list }}"
_storage_test_pool_pvs: []
_storage_test_expected_pv_count: "{{ 0 if storage_test_pool.state == 'absent' else (storage_test_pool.raid_level | ternary(1, storage_test_pool.disks|length)) }}"
when: storage_test_pool.type == 'lvm'

- name: Get the canonical device path for each member device
resolve_blockdev:
spec: "{{ pv }}"
loop: "{{ _storage_test_pool_pvs_lvm }}"
loop_control:
loop_var: pv
register: pv_paths
when: storage_test_pool.type == 'lvm'

- set_fact:
_storage_test_pool_pvs: "{{ _storage_test_pool_pvs }} + [ '{{ pv_paths.results[idx].device }}' ]"
loop: "{{ _storage_test_pool_pvs_lvm }}"
loop_control:
index_var: idx
when: storage_test_pool.type == 'lvm'

- name: Verify PV count
assert:
that: "{{ ansible_lvm.pvs|dict2items|json_query('[?value.vg==`\"{}\"`]'.format(storage_test_pool.name))|length == _storage_test_expected_pv_count|int }}"
msg: "Unexpected PV count for pool {{ storage_test_pool.name }}"
when: storage_test_pool.type == 'lvm'

- set_fact:
_storage_test_expected_pv_type: "{{ 'crypt' if storage_test_pool.encryption else 'disk' }}"
when: storage_test_pool.type == 'lvm'

- set_fact:
_storage_test_expected_pv_type: "{{ 'partition' if storage_use_partitions|default(false) else 'disk' }}"
when: storage_test_pool.type == 'lvm' and not storage_test_pool.encryption

- set_fact:
_storage_test_expected_pv_type: "{{ storage_test_pool.raid_level }}"
when: storage_test_pool.type == 'lvm' and storage_test_pool.raid_level

- name: Check the type of each PV
assert:
that: "{{ storage_test_blkinfo.info[pv]['type'] == _storage_test_expected_pv_type }}"
msg: "Incorrect type for PV {{ pv }} in pool {{ storage_test_pool.name }}"
loop: "{{ _storage_test_pool_pvs }}"
loop_control:
loop_var: pv
when: storage_test_pool.type == 'lvm'

- name: Check member encryption
include_tasks: verify-pool-members-encryption.yml

- set_fact:
_storage_test_expected_pv_type: null
_storage_test_expected_pv_count: null
_storage_test_pool_pvs_lvm: []
_storage_test_pool_pvs: []
21 changes: 11 additions & 10 deletions tests/test-verify-pool.yml
@@ -3,20 +3,21 @@
# Verify the pool configuration.
#
- set_fact:
_storage_pool_tests: ['name', 'type', 'size', 'members'] # members: disks, types
_storage_pool_tests: ['members']
# future:
# name
# type
# size
# members:
# encryption
# disks
# raid
# compression
# deduplication

#
# Verify the pool's volumes are configured correctly.
#
- name: Verify the volumes in this pool were correctly managed
include_tasks: "test-verify-volume.yml"
loop: "{{ storage_test_pool.volumes }}"
- name:
include_tasks: "test-verify-pool-{{ storage_test_pool_subset }}.yml"
loop: "{{ _storage_pool_tests }}"
loop_control:
loop_var: storage_test_volume
when: storage_test_pool is defined and storage_test_pool.volumes | length > 0
loop_var: storage_test_pool_subset

