diff --git a/.docs/user-guide/config.md b/.docs/user-guide/config.md
index f9160146..a0ef289e 100644
--- a/.docs/user-guide/config.md
+++ b/.docs/user-guide/config.md
@@ -588,6 +588,7 @@ remote storage systems. Currently the following storage drivers are supported:
 [VirtualBox](./storage-providers.md#virtualbox) | virtualbox
 [EBS](./storage-providers.md#aws-ebs) | ebs, ec2
 [EFS](./storage-providers.md#aws-efs) | efs
+[RBD](./storage-providers.md#ceph-rbd) | rbd
 ..more coming|
 
 The `libstorage.server.libstorage.storage.driver` property can be used to
@@ -694,6 +695,7 @@ ScaleIO|Yes
 VirtualBox|Yes
 EBS|Yes
 EFS|No
+RBD|No
 
 #### Ignore Used Count
 By default accounting takes place during operations that are performed
diff --git a/.docs/user-guide/storage-providers.md b/.docs/user-guide/storage-providers.md
index ffefa197..be875bee 100644
--- a/.docs/user-guide/storage-providers.md
+++ b/.docs/user-guide/storage-providers.md
@@ -517,3 +517,89 @@ libstorage:
       region: us-east-1
       tag: test
 ```
+## Ceph RBD
+The Ceph RBD driver registers a driver named `rbd` with the `libStorage` driver
+manager and is used to connect and mount RADOS Block Devices from a Ceph
+cluster.
+
+### Requirements
+
+* The `ceph` and `rbd` binary executables must be installed on the host
+* The `rbd` kernel module must be installed
+* A `ceph.conf` file must be present in its default location
+  (`/etc/ceph/ceph.conf`)
+* The Ceph `admin` key must be present in `/etc/ceph/`
+
+### Configuration
+The following is an example with all possible fields configured. For a running
+example, see the `Examples` section.
+
+```yaml
+rbd:
+  defaultPool: rbd
+```
+
+#### Configuration Notes
+
+* The `defaultPool` parameter is optional and defaults to "rbd". When set, all
+  volume requests that do not reference a specific pool will use the
+  `defaultPool` value as the destination storage pool.
+
+### Runtime Behavior
+
+The Ceph RBD driver only works when the client and server are on the same node.
+There is no way for a centralized `libStorage` server to attach volumes to
+clients; therefore, the `libStorage` server must be running on each node that
+wishes to mount RBD volumes.
+
+The RBD driver uses the format of `<pool>.<name>` for the volume ID. This allows
+for the use of multiple pools by the driver. During a volume create, if the
+volume ID is given as `<pool>.<name>`, a volume named *name* will be created in
+the *pool* storage pool. If no pool is referenced, the `defaultPool` will be
+used.
+
+When querying volumes, the driver will return all RBDs present in all pools in
+the cluster, prefixing each volume with the appropriate `<pool>.` value.
+
+All RBD volumes are created with the default 4MB object size and with the
+"layering" feature bit set to ensure the greatest compatibility with the kernel clients.
+
+### Activating the Driver
+To activate the Ceph RBD driver please follow the instructions for
+[activating storage drivers](./config.md#storage-drivers), using `rbd` as the
+driver name.
+
+### Troubleshooting
+
+* Make sure that the `ceph` and `rbd` commands work without extra parameters for
+  ID, key, and monitors. All configuration must come from `ceph.conf`.
+* Check the status of the Ceph cluster with the `ceph -s` command.
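+
+The checks above can be scripted into a quick, non-destructive sanity test. The
+commands below are only a suggested sequence; adjust the pool name if you do not
+use the default `rbd` pool.
+
+```bash
+# The cluster must be reachable using only /etc/ceph/ceph.conf and the admin key
+ceph -s
+ceph-conf --lookup mon_host
+
+# The rbd CLI must work without extra --id/--keyring/--mon-host flags
+rbd ls rbd
+
+# The rbd kernel module must be loadable
+sudo modprobe rbd && lsmod | grep '^rbd'
+```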
+ +### Examples + +Below is a full `config.yml` that works with RBD + +```yaml +libstorage: + server: + services: + rbd: + driver: rbd + rbd: + defaultPool: rbd +``` + +### Caveats + +* Snapshot and copy functionality is not yet implemented +* libStorage Server must be running on each host to mount/attach RBD volumes +* There is not yet options for using non-admin cephx keys or changing RBD create + features +* Volume pre-emption is not supported. Ceph does not provide a method to + forcefully detach a volume from a remote host -- only a host can attach and + detach volumes from itself. +* RBD advisory locks are not yet in use. A volume is returned as "unavailable" + if it has a watcher other than the requesting client. Until advisory locks are + in place, it may be possible for a client to attach a volume that is already + attached to another node. Mounting and writing to such a volume could lead to + data corruption. diff --git a/Makefile b/Makefile index 2ff2dd30..9f479323 100644 --- a/Makefile +++ b/Makefile @@ -12,6 +12,14 @@ BUILD_TAGS += libstorage_storage_driver \ endif endif +RBD_BUILD_TAGS := gofig \ + pflag \ + libstorage_integration_driver_docker \ + libstorage_storage_driver \ + libstorage_storage_driver_rbd \ + libstorage_storage_executor \ + libstorage_storage_executor_rbd + all: # if docker is running, then let's use docker to build it ifneq (,$(shell if docker version &> /dev/null; then echo -; fi)) @@ -1062,6 +1070,10 @@ test: test-debug: env LIBSTORAGE_DEBUG=true $(MAKE) test +test-rbd: + env BUILD_TAGS="$(RBD_BUILD_TAGS)" $(MAKE) deps + env BUILD_TAGS="$(RBD_BUILD_TAGS)" $(MAKE) ./drivers/storage/rbd/tests/rbd.test + clean: $(GO_CLEAN) clobber: clean $(GO_CLOBBER) diff --git a/drivers/storage/rbd/executor/rbd_executor.go b/drivers/storage/rbd/executor/rbd_executor.go new file mode 100644 index 00000000..7288bb17 --- /dev/null +++ b/drivers/storage/rbd/executor/rbd_executor.go @@ -0,0 +1,201 @@ +// +build !libstorage_storage_executor libstorage_storage_executor_rbd + +package executor + +import ( + "bufio" + "bytes" + "net" + "os/exec" + "strings" + + gofig "github.com/akutz/gofig/types" + "github.com/akutz/goof" + "github.com/akutz/gotil" + + "github.com/codedellemc/libstorage/api/registry" + "github.com/codedellemc/libstorage/api/types" + "github.com/codedellemc/libstorage/drivers/storage/rbd" + "github.com/codedellemc/libstorage/drivers/storage/rbd/utils" +) + +type driver struct { + config gofig.Config +} + +func init() { + registry.RegisterStorageExecutor(rbd.Name, newdriver) +} + +func newdriver() types.StorageExecutor { + return &driver{} +} + +func (d *driver) Init(context types.Context, config gofig.Config) error { + d.config = config + return nil +} + +func (d *driver) Name() string { + return rbd.Name +} + +func (d *driver) Supported( + ctx types.Context, + opts types.Store) (bool, error) { + + if !gotil.FileExistsInPath("ceph") { + return false, nil + } + + if !gotil.FileExistsInPath("rbd") { + return false, nil + } + + if !gotil.FileExistsInPath("ip") { + return false, nil + } + + if err := exec.Command("modprobe", "rbd").Run(); err != nil { + return false, nil + } + + return true, nil +} + +// NextDevice returns the next available device. +func (d *driver) NextDevice( + ctx types.Context, + opts types.Store) (string, error) { + + return "", types.ErrNotImplemented +} + +// LocalDevices returns a map of the system's local devices. 
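+// The result is built from the RBD images currently mapped on this host (via
+// utils.GetMappedRBDs), keyed by volume ID in <pool>.<name> form, with the
+// local block device path (for example /dev/rbd0) as the value.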
+func (d *driver) LocalDevices( + ctx types.Context, + opts *types.LocalDevicesOpts) (*types.LocalDevices, error) { + + devMap, err := utils.GetMappedRBDs() + if err != nil { + return nil, err + } + + ld := &types.LocalDevices{Driver: d.Name()} + if len(devMap) > 0 { + ld.DeviceMap = devMap + } + + return ld, nil +} + +// InstanceID returns the local system's InstanceID. +func (d *driver) InstanceID( + ctx types.Context, + opts types.Store) (*types.InstanceID, error) { + + return GetInstanceID(nil, nil) +} + +// GetInstanceID returns the instance ID object +func GetInstanceID( + monIPs []net.IP, + localIntfs []net.Addr) (*types.InstanceID, error) { + /* Ceph doesn't have only one unique identifier per client, it can have + several. With the way the RBD driver is used, we will see multiple + identifiers used, and therefore returning any of those identifiers + is actually confusing rather than helpful. Instead, we use the client + IP address that is on the interface that can reach the monitors. + + We loop through all the monitor IPs, looking for a local interface + that is on the same L2 segment. If these all fail, We are on an L3 + segment so we grab the IP from the default route. + */ + + var err error + if nil == monIPs { + monIPs, err = getCephMonIPs() + if err != nil { + return nil, err + } + } + if len(monIPs) == 0 { + return nil, goof.New("No Ceph Monitors found") + } + + if nil == localIntfs { + localIntfs, err = net.InterfaceAddrs() + if err != nil { + return nil, err + } + } + + iid := &types.InstanceID{Driver: rbd.Name} + for _, intf := range localIntfs { + localIP, localNet, _ := net.ParseCIDR(intf.String()) + for _, monIP := range monIPs { + if localNet.Contains(monIP) { + // Monitor reachable over L2 + iid.ID = localIP.String() + return iid, nil + } + } + } + + // No luck finding L2 match, check for default/static route to monitor + localIP, err := getSrcIP(monIPs[0].String()) + if err != nil { + return nil, err + } + iid.ID = localIP + + return iid, nil +} + +func getCephMonIPs() ([]net.IP, error) { + out, err := exec.Command("ceph-conf", "--lookup", "mon_host").Output() + if err != nil { + return nil, goof.WithError("Unable to get Ceph monitors", err) + } + + monStrings := strings.Split(strings.TrimSpace(string(out)), ",") + + monIps := make([]net.IP, 0, 4) + + for _, mon := range monStrings { + ip := net.ParseIP(mon) + if ip != nil { + monIps = append(monIps, ip) + } else { + ipSlice, err := net.LookupIP(mon) + if err == nil { + monIps = append(monIps, ipSlice...) 
+ } + } + } + + return monIps, nil +} + +func getSrcIP(destIP string) (string, error) { + out, err := exec.Command( + "ip", "-oneline", "route", "get", destIP).Output() + if err != nil { + return "", goof.WithError("Unable get IP routes", err) + } + + byteReader := bytes.NewReader(out) + scanner := bufio.NewScanner(byteReader) + scanner.Split(bufio.ScanWords) + found := false + for scanner.Scan() { + if !found { + if scanner.Text() == "src" { + found = true + continue + } + } + return scanner.Text(), nil + } + return "", goof.New("Unable to parse ip output") +} diff --git a/drivers/storage/rbd/rbd.go b/drivers/storage/rbd/rbd.go new file mode 100644 index 00000000..eb6621ec --- /dev/null +++ b/drivers/storage/rbd/rbd.go @@ -0,0 +1,23 @@ +// +build !libstorage_storage_driver libstorage_storage_driver_rbd + +package rbd + +import ( + gofigCore "github.com/akutz/gofig" + gofig "github.com/akutz/gofig/types" +) + +const ( + // Name is the name of the storage driver + Name = "rbd" +) + +func init() { + registerConfig() +} + +func registerConfig() { + r := gofigCore.NewRegistration("RBD") + r.Key(gofig.String, "", "rbd", "", "rbd.defaultPool") + gofigCore.Register(r) +} diff --git a/drivers/storage/rbd/storage/rbd_storage.go b/drivers/storage/rbd/storage/rbd_storage.go new file mode 100644 index 00000000..7045d997 --- /dev/null +++ b/drivers/storage/rbd/storage/rbd_storage.go @@ -0,0 +1,434 @@ +// +build !libstorage_storage_driver libstorage_storage_driver_rbd + +package storage + +import ( + "regexp" + + log "github.com/Sirupsen/logrus" + + gofig "github.com/akutz/gofig/types" + "github.com/akutz/goof" + + "github.com/codedellemc/libstorage/api/context" + "github.com/codedellemc/libstorage/api/registry" + "github.com/codedellemc/libstorage/api/types" + "github.com/codedellemc/libstorage/drivers/storage/rbd" + "github.com/codedellemc/libstorage/drivers/storage/rbd/utils" +) + +const ( + rbdDefaultOrder = 22 + bytesPerGiB = 1024 * 1024 * 1024 +) + +var ( + featureLayering = "layering" + defaultObjectSize = "4M" +) + +type driver struct { + config gofig.Config +} + +func init() { + registry.RegisterStorageDriver(rbd.Name, newDriver) +} + +func newDriver() types.StorageDriver { + return &driver{} +} + +func (d *driver) Name() string { + return rbd.Name +} + +// Init initializes the driver. +func (d *driver) Init(context types.Context, config gofig.Config) error { + d.config = config + log.Info("storage driver initialized") + return nil +} + +func (d *driver) Type(ctx types.Context) (types.StorageType, error) { + return types.Block, nil +} + +func (d *driver) NextDeviceInfo( + ctx types.Context) (*types.NextDeviceInfo, error) { + return nil, nil +} + +func (d *driver) InstanceInspect( + ctx types.Context, + opts types.Store) (*types.Instance, error) { + + iid := context.MustInstanceID(ctx) + return &types.Instance{ + InstanceID: iid, + }, nil +} + +func (d *driver) Volumes( + ctx types.Context, + opts *types.VolumesOpts) ([]*types.Volume, error) { + + // Get all Volumes in all pools + pools, err := utils.GetRadosPools() + if err != nil { + return nil, err + } + + var volumes []*types.Volume + + for _, pool := range pools { + images, err := utils.GetRBDImages(pool) + if err != nil { + return nil, err + } + + vols, err := d.toTypeVolumes( + ctx, images, opts.Attachments) + if err != nil { + /* Should we try to continue instead? */ + return nil, err + } + volumes = append(volumes, vols...) 
+ } + + return volumes, nil +} + +func (d *driver) VolumeInspect( + ctx types.Context, + volumeID string, + opts *types.VolumeInspectOpts) (*types.Volume, error) { + + pool, image, err := d.parseVolumeID(&volumeID) + if err != nil { + return nil, err + } + + info, err := utils.GetRBDInfo(pool, image) + if err != nil { + return nil, err + } + + // no volume returned + if info == nil { + return nil, nil + } + + /* GetRBDInfo returns more details about an image than what we get back + from GetRBDImages. We could just use that and then grab the image we + want, but using GetRBDInfo() instead in case we ever want to send + back more detaild information from VolumeInspect() than Volumes() */ + images := []*utils.RBDImage{ + &utils.RBDImage{ + Name: info.Name, + Size: info.Size, + Pool: info.Pool, + }, + } + + vols, err := d.toTypeVolumes(ctx, images, opts.Attachments) + if err != nil { + return nil, err + } + + return vols[0], nil +} + +func (d *driver) VolumeCreate(ctx types.Context, volumeName string, + opts *types.VolumeCreateOpts) (*types.Volume, error) { + + fields := map[string]interface{}{ + "driverName": d.Name(), + "volumeName": volumeName, + "opts.size": *opts.Size, + } + + log.WithFields(fields).Debug("creating volume") + + pool, imageName, err := d.parseVolumeID(&volumeName) + if err != nil { + return nil, err + } + + info, err := utils.GetRBDInfo(pool, imageName) + if err != nil { + return nil, err + } + + // volume already exists + if info != nil { + return nil, goof.New("Volume already exists") + } + + //TODO: config options for order and features? + + features := []*string{&featureLayering} + + err = utils.RBDCreate( + pool, + imageName, + opts.Size, + &defaultObjectSize, + features, + ) + if err != nil { + return nil, goof.WithError("Failed to create new volume", err) + } + + volumeID := utils.GetVolumeID(pool, imageName) + return d.VolumeInspect(ctx, *volumeID, + &types.VolumeInspectOpts{ + Attachments: types.VolAttNone, + }, + ) +} + +func (d *driver) VolumeCreateFromSnapshot( + ctx types.Context, + snapshotID, volumeName string, + opts *types.VolumeCreateOpts) (*types.Volume, error) { + return nil, types.ErrNotImplemented +} + +func (d *driver) VolumeCopy( + ctx types.Context, + volumeID, volumeName string, + opts types.Store) (*types.Volume, error) { + return nil, types.ErrNotImplemented +} + +func (d *driver) VolumeSnapshot( + ctx types.Context, + volumeID, snapshotName string, + opts types.Store) (*types.Snapshot, error) { + return nil, types.ErrNotImplemented +} + +func (d *driver) VolumeRemove( + ctx types.Context, + volumeID string, + opts types.Store) error { + + fields := map[string]interface{}{ + "driverName": d.Name(), + "volumeID": volumeID, + } + + log.WithFields(fields).Debug("deleting volume") + + pool, imageName, err := d.parseVolumeID(&volumeID) + if err != nil { + return goof.WithError("Unable to set image name", err) + } + + err = utils.RBDRemove(pool, imageName) + if err != nil { + return goof.WithError("Error while deleting RBD image", err) + } + log.WithFields(fields).Debug("removed volume") + + return nil +} + +func (d *driver) VolumeAttach( + ctx types.Context, + volumeID string, + opts *types.VolumeAttachOpts) (*types.Volume, string, error) { + + fields := map[string]interface{}{ + "driverName": d.Name(), + "volumeID": volumeID, + } + + log.WithFields(fields).Debug("attaching volume") + + pool, imageName, err := d.parseVolumeID(&volumeID) + if err != nil { + return nil, "", goof.WithError("Unable to set image name", err) + } + + _, err = 
utils.RBDMap(pool, imageName) + if err != nil { + return nil, "", err + } + + vol, err := d.VolumeInspect(ctx, volumeID, + &types.VolumeInspectOpts{ + Attachments: types.VolAttReqTrue, + }, + ) + if err != nil { + return nil, "", err + } + + return vol, volumeID, nil +} + +func (d *driver) VolumeDetach( + ctx types.Context, + volumeID string, + opts *types.VolumeDetachOpts) (*types.Volume, error) { + + fields := map[string]interface{}{ + "driverName": d.Name(), + "volumeID": volumeID, + } + + log.WithFields(fields).Debug("detaching volume") + + // Can't rely on local devices header, so get local attachments + localAttachMap, err := utils.GetMappedRBDs() + if err != nil { + return nil, err + } + + dev, found := localAttachMap[volumeID] + if !found { + return nil, goof.New("Volume not attached") + } + + err = utils.RBDUnmap(&dev) + if err != nil { + return nil, goof.WithError("Unable to detach volume", err) + } + + return d.VolumeInspect( + ctx, volumeID, &types.VolumeInspectOpts{ + Attachments: types.VolAttReqTrue, + }, + ) +} + +func (d *driver) VolumeDetachAll( + ctx types.Context, + volumeID string, + opts types.Store) error { + return types.ErrNotImplemented +} + +func (d *driver) Snapshots( + ctx types.Context, + opts types.Store) ([]*types.Snapshot, error) { + return nil, types.ErrNotImplemented +} + +func (d *driver) SnapshotInspect( + ctx types.Context, + snapshotID string, + opts types.Store) (*types.Snapshot, error) { + return nil, types.ErrNotImplemented +} + +func (d *driver) SnapshotCopy( + ctx types.Context, + snapshotID, snapshotName, destinationID string, + opts types.Store) (*types.Snapshot, error) { + return nil, types.ErrNotImplemented +} + +func (d *driver) SnapshotRemove( + ctx types.Context, + snapshotID string, + opts types.Store) error { + return types.ErrNotImplemented +} + +func (d *driver) defaultPool() string { + return d.config.GetString("rbd.defaultPool") +} + +func (d *driver) toTypeVolumes( + ctx types.Context, + images []*utils.RBDImage, + getAttachments types.VolumeAttachmentsTypes) ([]*types.Volume, error) { + + lsVolumes := make([]*types.Volume, len(images)) + + var localAttachMap map[string]string + + // Even though this will be the same as LocalDevices header, we can't + // rely on that being present unless getAttachments.Devices is set + if getAttachments.Requested() { + var err error + localAttachMap, err = utils.GetMappedRBDs() + if err != nil { + return nil, err + } + } + + for i, image := range images { + rbdID := utils.GetVolumeID(&image.Pool, &image.Name) + lsVolume := &types.Volume{ + Name: image.Name, + ID: *rbdID, + Type: image.Pool, + Size: int64(image.Size / bytesPerGiB), + } + + if getAttachments.Requested() && localAttachMap != nil { + // Set volumeAttachmentState to Unknown, because this + // driver (currently) has no way of knowing if an image + // is attached anywhere else but to the caller + lsVolume.AttachmentState = types.VolumeAttachmentStateUnknown + var attachments []*types.VolumeAttachment + if _, found := localAttachMap[*rbdID]; found { + lsVolume.AttachmentState = types.VolumeAttached + attachment := &types.VolumeAttachment{ + VolumeID: *rbdID, + InstanceID: context.MustInstanceID(ctx), + } + if getAttachments.Devices() { + ld, ok := context.LocalDevices(ctx) + if ok { + attachment.DeviceName = ld.DeviceMap[*rbdID] + } else { + log.Warnf("Unable to get local device map for volume %s", *rbdID) + } + } + attachments = append(attachments, attachment) + lsVolume.Attachments = attachments + } else { + //Check if RBD has watchers to 
infer attachment + //to a different host + b, err := utils.RBDHasWatchers( + &image.Pool, &image.Name, + ) + if err == nil { + if b { + lsVolume.AttachmentState = types.VolumeUnavailable + } else { + lsVolume.AttachmentState = types.VolumeAvailable + } + } + } + } + lsVolumes[i] = lsVolume + } + + return lsVolumes, nil +} + +func (d *driver) parseVolumeID(name *string) (*string, *string, error) { + + // Look for . + re, _ := regexp.Compile(`^(\w+)\.(\w+)$`) + res := re.FindStringSubmatch(*name) + if len(res) == 3 { + // Name includes pool already + return &res[1], &res[2], nil + } + + // make sure is valid + re, _ = regexp.Compile(`^\w+$`) + if !re.MatchString(*name) { + return nil, nil, goof.New("Invalid VolumeID") + } + + pool := d.defaultPool() + return &pool, name, nil +} diff --git a/drivers/storage/rbd/tests/.gitignore b/drivers/storage/rbd/tests/.gitignore new file mode 100644 index 00000000..26dfec66 --- /dev/null +++ b/drivers/storage/rbd/tests/.gitignore @@ -0,0 +1,2 @@ +.vagrant +*.vdi diff --git a/drivers/storage/rbd/tests/README.md b/drivers/storage/rbd/tests/README.md new file mode 100644 index 00000000..b289c702 --- /dev/null +++ b/drivers/storage/rbd/tests/README.md @@ -0,0 +1,108 @@ +# RBD Driver Testing +This package includes two different kinds of tests for the RBD storage driver: + +Test Type | Description +----------|------------ +Unit/Integration | The unit/integration tests are provided and executed via the standard Go test pattern, in a file named `rbd_test.go`. These tests are designed to test the storage driver's and executor's functions at a low-level, ensuring, given the proper input, the expected output is received. +Test Execution Plan | The test execution plan operates above the code-level, using a Vagrantfile to deploy a complete implementation of the RBD storage driver in order to run real-world, end-to-end test scenarios. + +## Unit/Integration Tests +The unit/integration tests must be executed on a node that has access to a Ceph +cluster. In order to execute the tests either compile the test binary locally or +on the instance. From the root of the libStorage project execute the following: + +```bash +GOOS=linux make test-rbd +``` + +Once the test binary is compiled, if it was built locally, copy it to the node +where testing can occur + +Using an SSH session to connect to the node, the tests may now be executed as +root with the following command: + +```bash +sudo ./rbd.test +``` + +An exit code of `0` means the tests completed successfully. If there are errors +then it may be useful to run the tests once more with increased logging: + +```bash +sudo LIBSTORAGE_LOGGING_LEVEL=debug ./rbd.test -test.v +``` + +## Test Execution Plan +In addition to the low-level unit/integration tests, the RBD storage driver +provides a test execution plan automated with Vagrant: + +``` +vagrant up --provider=virtualbox +``` + +The above command brings up a Vagrant environment using VirtualBox virtual +machines in order to test the RBD driver. If the command completes successfully +then the environment was brought online without issue and indicates that the +test execution plan succeeded as well. + +The following sections outline dependencies, settings, and different execution +scenarios that may be required or useful for using the Vagrantfile. 
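+
+For convenience, the individual steps described in the sections below can be
+summarized as the following end-to-end run (assuming VirtualBox, Vagrant, and
+the required plug-in are already installed):
+
+```bash
+# One-time setup, detailed in "Test Plan Dependencies"
+vagrant plugin install vagrant-hostmanager
+ssh-add ~/.vagrant.d/insecure_private_key
+
+# Bring up the Ceph cluster and clients, then run the test plan
+vagrant up --provider=virtualbox
+
+# Clean up when finished
+vagrant destroy -f
+```
+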
+ +### Test Plan Dependencies +The following dependencies are required in order to execute the included test +execution plan: + + * [Vagrant](https://www.vagrantup.com/) 1.8.4+ + * [vagrant-hostmanager](https://github.com/devopsgroup-io/vagrant-hostmanager) + +Once Vagrant is installed the required plug-ins may be installed with the +following commands: + +```bash +vagrant plugin install vagrant-hostmanager +``` + +**NOTE**: The VMs contain the default Vagrant insecure SSH public key, such that +`vagrant ssh` works by default. However, the `ceph-admin` VM needs to be able to +SSH to the other VMs in order to configure Ceph via `ceph-deploy`. In order to +do this, the Vagrant SSH private key must be in your local SSH agent. The most +typical way to accomplish this on a nix-like machine is by running the command: + +``` +ssh-add ~/.vagrant.d/insecure_private_key +``` + +Configuration of the Ceph cluster will not work without this step. + +### Test Plan Nodes +The `Vagrantfile` deploys a Ceph cluster and two RBD/rexray clients named: + + * libstorage-rbd-test-server1 + * libstorage-rbd-test-server2 + * libstorage-ebs-test-admin + * libstorage-ebs-test-client + +The "admin" node is identical to client, except it is also use to configure +the Ceph cluster using `ceph-deploy`. + +### Test Plan Scripts +This package includes several test scripts that act as the primary executors +of the test plan: + + * `server-tests.sh` + * `client0-tests.sh` + * `client1-tests.sh` + +The above files are copied to their respective instances and executed +as soon as the instance is online. That means the `server-tests.sh` script is +executed before the client nodes are even online since the server node is +brought online first. + +### Test Plan Cleanup +Once the test plan has been executed, successfully or otherwise, it's important +to remember to clean up the VirtualBox resources that may have been created +along the way. To do so simply execute the following command: + +```bash +vagrant destroy -f +``` diff --git a/drivers/storage/rbd/tests/Vagrantfile b/drivers/storage/rbd/tests/Vagrantfile new file mode 100644 index 00000000..9f0389e7 --- /dev/null +++ b/drivers/storage/rbd/tests/Vagrantfile @@ -0,0 +1,151 @@ +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# Number of Ceph server nodes +server_nodes = 2 + +# /24 network to use for private network +network = "172.21.13" + +# Flags to control which REX-Ray to install +# Setting all options to false will use whatever rexray is already present +install_latest_stable_rex = true +install_latest_staged_rex = false +install_rex_from_source = false + +# Script to build rexray from source +$build_rexray = <