diff --git a/README.md b/README.md
index 2724d64..af592d7 100644
--- a/README.md
+++ b/README.md
@@ -108,16 +108,15 @@ by Jenkins Ansible playbooks, so you do not need to do it manually.
 
 ### Automatically setup Jenkins master and slave(s) using Ansible playbooks
 
-There are three Ansible playbooks: one that sets up a Jenkins master node,
-and two that set up Jenkins slave nodes, either for building or executing
-Build Verification Tests. Read the
+There are two Ansible playbooks: one that sets up a Jenkins master node
+and another that sets up a Jenkins slave node. Read the
 [Ansible instructions](ansible/README.md) for details on how to execute
 the playbooks.
 
-If you wish to have a single system hosting the entire Jenkins instance, the
+If you wish to have a single system hosting the entire Jenkins instance, both
 playbooks can be executed in the same system. Execute the Jenkins master
 playbook first, stop the Jenkins service (systemctl stop jenkins), and then
-execute the other playbooks.
+execute the Jenkins slave playbook.
 
 Note: The Jenkins playbooks may fail due to network errors. If you see HTTP
 request errors, try executing them again.
@@ -181,18 +180,6 @@ of the slave in IP_ADDRESS job parameter. The other job parameters values do
 not need to be modified. You should have already executed the Jenkins slave
 playbook(s) on those slaves.
 
-##### Create BVT slave in Jenkins web UI
-
-The Build Verification Tests (BVT) use [Avocado](https://avocado-framework.github.io/)
-to execute tests on virtual machines. The tests require passwordless access to
-sudo, so it is recommended that a separate machine is used for that purpose. It
-may be a virtual machine; the tests will then run in a nested virtualized
-guest.
-
-To create the slave node from the Jenkins UI, follow the same instructions
-above, setting the IP_ADDRESS and, additionally, changing the NODE_LABEL to
-"bvt_slave_label".
-
 #### Create build jobs
 
 When the credentials are configured, execute the seed job at
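For the single-system setup described above, the whole sequence can be scripted. A minimal sketch, assuming `ansible-playbook` and `systemctl` are on the PATH; the playbook file names below are illustrative placeholders, so use the ones documented in the [Ansible instructions](ansible/README.md):

```
# Sketch of the single-system setup order from the README.
# Playbook file names are placeholders, not the repository's real ones.
import shlex
import subprocess

def run(command):
    print("Executing: " + command)
    subprocess.check_call(shlex.split(command))

run("ansible-playbook -i ansible/hosts.ini ansible/jenkins-master.yaml")
# Per the README, stop Jenkins before running the slave playbook.
run("sudo systemctl stop jenkins")
run("ansible-playbook -i ansible/hosts.ini ansible/jenkins-slave.yaml")
```

diff --git a/ansible/README.md b/ansible/README.md
index 3db1ff3..c515ba0 100644
--- a/ansible/README.md
+++ b/ansible/README.md
@@ -62,96 +62,5 @@ artifacts to the configured remote server.
   - Path to the SSH known keys for remote hosts, usually
     `~/.ssh/known_hosts`. The local file is just copied to remote target.
 
-- `bvt-host.yaml` playbook
-
-  - Paths to SSH private and public keys used to communicate between master
-    and slave nodes.
-  - Path to the SSH private key used to upload artifacts to the configured
-    remote server.
-  - Path to the SSH known keys for remote hosts, usually
-    `~/.ssh/known_hosts`. The local file is just copied to remote target.
-
 Provide the data requested by the playbooks (e.g. Jenkins admin user
 name/password and SSH key locations) and wait for automatic setup to finish.
-
-## Baremetal provisioning via Ansible
-
-The `bm-deploy.yaml` playbook allows the provisioning of a POWER
-(baremetal) machine using PXE. A "controller node" that can serve DHCP
-to the machine being provisioned is required.
-
-The deployment consists of two steps:
-
-### Preparation of the controller node
-
-This is a one-time operation needed to prepare the machine that will
-be used as controller for the provisioning. This machine should be in
-the same network as the to-be-provisioned (target) machine, since it
-must serve DHCP to the target. Using a virtual machine is OK.
-
-The controller node is referred to in the `hosts.ini` file as
-`[baremetal-ctrl]`.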
-
-Run once for each controller setup:
-
-```
-ansible-playbook -i hosts.ini --tags=setup bm-deploy.yaml
-```
-
-After that, the controller node will be capable of serving files to
-the target machine.
-
-For detailed info, see the [baremetal-ctrl](roles/baremetal-ctrl) role.
-
-### Deployment
-
-This step is executed each time a machine needs to be provisioned. It
-uses IPMI to power the machine on/off, DHCP for setting up PXE and
-HTTP for serving files. Services on the controller node are started on
-demand.
-
-Prerequisites:
-
-- The variables file vars-baremetal.yaml should be edited with
-  information about the baremetal machine prior to execution of the
-  "deploy" tag.
-
-- A .iso file named after the MAC address of the network interface of
-  the target machine (`<mac_address>_deploy.iso`) is expected to be at
-  /tmp.
-
-  ```
-  cp <iso_file> /tmp/<mac_address>_deploy.iso
-  ```
-
-Run every time a deploy is required:
-
-```
-ansible-playbook -i hosts.ini --tags=deploy bm-deploy.yaml
-```
-
-#### Deployment flow
-
-The flow of the deploy after the user runs the playbook with the
-"deploy" tag (C - controller node, B - baremetal node) is:
-
-- C: Mount ISO file inside the HTTP server directory
-- C: Using IPMI, set machine to boot via network
-- C: Using IPMI, turn the baremetal machine on
-- C: Serve DHCP along with PXE configuration file location
-
-- B: Boot and download PXE configuration file from HTTP server containing
-  kickstart location and boot params
-- B: Install via kickstart
-- B: Run post script that adds authorized SSH keys and starts SSH server
-- C: Detect that installation has finished by SSH server presence
-- C: Fetch installed filesystem UUID
-- C: Copy kernel/initramfs to the controller node
-- C: Reconfigure PXE to boot the new installation
-
-- C: Using IPMI, turn the machine off
-- C: Using IPMI, turn the machine back on
-
-Deployment is finished and the machine is accessible via SSH.
-
-For detailed info, see the [deploy-baremetal](roles/deploy-baremetal) role.
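The IPMI steps in the deployment flow above are what the `ipmi_boot`/`ipmi_power` tasks perform via pyghmi, the library the controller role installs. A standalone sketch of that power-cycle, with placeholder BMC address and credentials:

```
# Sketch of the controller-side IPMI steps: set network boot,
# then power-cycle the machine. BMC address/credentials are
# placeholders; error handling omitted.
from pyghmi.ipmi import command

ipmi = command.Command(bmc="192.0.2.10", userid="ADMIN", password="secret")
ipmi.set_bootdev("network")      # same intent as the ipmi_boot task
ipmi.set_power("off", wait=600)  # ipmi_power: state=off, timeout=600
ipmi.set_power("on", wait=600)   # ipmi_power: state=on, timeout=600
```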
diff --git a/ansible/bm-deploy.yaml b/ansible/bm-deploy.yaml
deleted file mode 100644
index eead3a1..0000000
--- a/ansible/bm-deploy.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
----
-- name: Deploy baremetal machine
-  hosts: baremetal-ctrl
-  roles:
-    - baremetal-ctrl
-    - deploy-baremetal
-  post_tasks:
-    - shell: echo -e "Node deployed:\n\tssh root@{{ baremetal.ip_address }}"
-      tags: deploy
-  vars_files:
-    - vars-baremetal.yaml
diff --git a/ansible/bvt-host.yaml b/ansible/bvt-host.yaml
deleted file mode 100644
index c083268..0000000
--- a/ansible/bvt-host.yaml
+++ /dev/null
@@ -1,30 +0,0 @@
----
-- name: Setup host for Build Verification Tests
-  hosts: bvt-host
-  remote_user: root
-  roles:
-    - selinux
-    - time
-    - packages-jenkins-slave
-    - user
-    - ssh
-    - avocado
-  vars_files:
-    - vars-bvt.yaml
-  vars_prompt:
-    - name: "jenkins_private_ssh_key_file_path"
-      prompt: "Enter Jenkins private SSH key file path"
-      default: "~/.ssh/jenkins_id_rsa"
-      private: no
-    - name: "jenkins_public_ssh_key_file_path"
-      prompt: "Enter Jenkins public SSH key file path"
-      default: "~/.ssh/jenkins_id_rsa.pub"
-      private: no
-    - name: "upload_server_user_private_ssh_key_file_path"
-      prompt: "Enter upload server user's private SSH key file path"
-      default: "~/.ssh/open-power-host-os-builds-bot_id_rsa"
-      private: no
-    - name: "known_hosts_file_path"
-      prompt: "Enter path to file containing known keys for upload server host"
-      default: "~/.ssh/known_hosts"
-      private: no
diff --git a/ansible/hosts.ini b/ansible/hosts.ini
index 17dea9a..312ed7d 100644
--- a/ansible/hosts.ini
+++ b/ansible/hosts.ini
@@ -6,9 +6,3 @@ host-os-jenkins.example.com
 [jenkins-slave]
 host-os-jenkins-slave01.example.com
 host-os-jenkins-slave02.example.com
-
-[baremetal-ctrl]
-host-os-jenkins-slave04.example.com
-
-[bvt-host]
-host-os-bvt-host.example.com
diff --git a/ansible/roles/avocado-repo/tasks/main.yaml b/ansible/roles/avocado-repo/tasks/main.yaml
deleted file mode 100644
index 3e24cdf..0000000
--- a/ansible/roles/avocado-repo/tasks/main.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-- name: Setup avocado repository
-  get_url:
-    url: http://avocado-project.org/data/repos/avocado-el.repo
-    dest: /etc/yum.repos.d/avocado.repo
-    force: yes
-    owner: root
-    group: root
-    mode: 0644
-  tags:
-    - setup
diff --git a/ansible/roles/avocado/defaults/main.yaml b/ansible/roles/avocado/defaults/main.yaml
deleted file mode 100644
index f0150e9..0000000
--- a/ansible/roles/avocado/defaults/main.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
----
-distro_packages:
-  - attr
-  - avocado
-  - avocado-plugins-vt
-  - libvirt
-  - policycoreutils-python
-  - qemu-kvm
-  - virt-install
-  - wget
diff --git a/ansible/roles/avocado/files/host-os-bvt.ini b/ansible/roles/avocado/files/host-os-bvt.ini
deleted file mode 100644
index f3ca80a..0000000
--- a/ansible/roles/avocado/files/host-os-bvt.ini
+++ /dev/null
@@ -1,10 +0,0 @@
-[provider]
-uri: https://github.com/open-power-host-os/bvt.git
-branch: master
-[generic]
-subdir: generic
-
-# A bug in avocado-vt prevents the tests from being listed without this
-# https://github.com/avocado-framework/avocado-vt/issues/1222
-[multi_host_migration]
-subdir: generic
diff --git a/ansible/roles/avocado/handlers/main.yaml b/ansible/roles/avocado/handlers/main.yaml
deleted file mode 100644
index 52354bc..0000000
--- a/ansible/roles/avocado/handlers/main.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
----
-- name: vt-install
-  command: python setup.py install
-  args:
-    chdir: "{{avocado_vt_repo_dir}}"
-
-- name: vt-bootstrap
-  command: avocado vt-bootstrap --vt-type {{item}} --vt-no-downloads --yes-to-all
-  with_items:
-    - qemu
-    - libvirt
diff --git a/ansible/roles/avocado/meta/main.yaml b/ansible/roles/avocado/meta/main.yaml
deleted file mode 100644
index 3edea1c..0000000
--- a/ansible/roles/avocado/meta/main.yaml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-dependencies:
-  - role: epel
-  - role: avocado-repo
-  - role: pkg_install
diff --git a/ansible/roles/avocado/tasks/main.yaml b/ansible/roles/avocado/tasks/main.yaml
deleted file mode 100644
index 19e944b..0000000
--- a/ansible/roles/avocado/tasks/main.yaml
+++ /dev/null
@@ -1,127 +0,0 @@
-- name: Make sure we have a 'wheel' group
-  group:
-    name: wheel
-    state: present
-  tags:
-    - setup
-
-- name: Allow 'wheel' group to have passwordless sudo
-  lineinfile:
-    dest: /etc/sudoers
-    state: present
-    regexp: '^%wheel'
-    line: '%wheel ALL=(ALL) NOPASSWD: ALL'
-    validate: 'visudo -cf %s'
-  tags:
-    - setup
-
-- name: Add sudoers users to wheel group
-  user:
-    name: "{{user_name}}"
-    groups: wheel
-    append: yes
-    state: present
-    createhome: yes
-  tags:
-    - setup
-
-- name: Configure SSH key to download from remote server
-  copy:
-    src={{upload_server_user_private_ssh_key_file_path}}
-    dest="{{user_home_dir}}/.ssh/upload_server_id_rsa"
-    owner={{user_name}} group={{user_name}} mode=0600
-  tags:
-    - setup
-
-- name: Add known keys for remote hosts
-  copy:
-    src={{known_hosts_file_path}} dest="{{jenkins_home_dir}}/.ssh/known_hosts"
-    owner={{user_name}} group={{user_name}}
-  tags:
-    - setup
-
-# This is necessary to create virbr0 interface. You can create it
-# manually, but libvirt does the job for you.
-- name: Start libvirtd service
-  service:
-    name: libvirtd
-    state: started
-    enabled: yes
-  tags:
-    - setup
-
-# Those ports are used to provide kickstart files via HTTP
-- name: Open ports in firewalld
-  firewalld:
-    port: "{{avocado_http_ports}}"
-    state: enabled
-    permanent: yes
-    immediate: yes
-  tags:
-    - setup
-
-- name: Creating avocado directories
-  file:
-    path: "{{item}}"
-    state: directory
-    owner: root
-    group: root
-    mode: 0755
-  with_items: "{{avocado_dirs}}"
-  tags:
-    - setup
-
-- name: Clone avocado-vt
-  git:
-    repo: "{{avocado_vt_repo}}"
-    version: "{{avocado_vt_branch}}"
-    dest: "{{avocado_vt_repo_dir}}"
-    update: yes
-  notify:
-    - vt-install
-    - vt-bootstrap
-  tags:
-    - setup
-
-- name: Setup avocado configuration
-  template:
-    src: avocado.conf.j2
-    dest: "{{avocado_conf}}"
-    owner: root
-    group: root
-    mode: 0644
-  notify: vt-bootstrap
-  tags:
-    - setup
-
-- name: Setup test providers
-  copy:
-    src: host-os-bvt.ini
-    dest: "{{test_providers_dir}}/host-os-bvt.ini"
-    owner: root
-    group: root
-    mode: 0644
-  notify: vt-bootstrap
-  tags:
-    - setup
-
-- name: Cloning test providers
-  git:
-    repo: "{{item.repo}}"
-    dest: "{{item.dest}}"
-    update: yes
-  notify: vt-bootstrap
-  with_items:
-    - "{{avocado_repos}}"
-  tags:
-    - setup
-
-- name: Install BVT script
-  copy:
-    src: "{{playbook_dir}}/../scripts/host-os-bvt.py"
-    dest: /usr/bin/host-os-bvt
-    owner: root
-    group: root
-    mode: 0555
-  tags:
-    - setup
diff --git a/ansible/roles/avocado/templates/avocado.conf.j2 b/ansible/roles/avocado/templates/avocado.conf.j2
deleted file mode 100644
index a3be3c4..0000000
--- a/ansible/roles/avocado/templates/avocado.conf.j2
+++ /dev/null
@@ -1,11 +0,0 @@
-# {{avocado_conf}}
-
-[datadir.paths]
-base_dir = {{avocado_dir}}
-test_dir = {{avocado_test_dir}}
-data_dir = {{avocado_data_dir}}
-logs_dir = {{avocado_log_dir}}
-
-[sysinfo.collect]
-enabled = True
-profiler = True
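The `vt-bootstrap` handler above shells out to avocado once per backend. Outside Ansible, the same bootstrap can be driven directly; a sketch, assuming the `avocado` and `avocado-plugins-vt` packages from the role defaults are installed:

```
# Sketch of what the vt-bootstrap handler runs, outside Ansible.
import shlex
import subprocess

for vt_type in ("qemu", "libvirt"):
    cmd = ("avocado vt-bootstrap --vt-type {0} "
           "--vt-no-downloads --yes-to-all".format(vt_type))
    subprocess.check_call(shlex.split(cmd))
```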
diff --git a/ansible/roles/baremetal-ctrl/defaults/main.yaml b/ansible/roles/baremetal-ctrl/defaults/main.yaml
deleted file mode 100644
index 929abf9..0000000
--- a/ansible/roles/baremetal-ctrl/defaults/main.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
----
-http_server:
-  root: /srv/httpboot
-  port: 80
-distro_packages:
-  - python2-pip
-  # pyghmi dependency, pip crashes when installing it.
-  - python2-crypto
-  - rsync
-python_packages:
-  - pyghmi
diff --git a/ansible/roles/baremetal-ctrl/meta/main.yaml b/ansible/roles/baremetal-ctrl/meta/main.yaml
deleted file mode 100644
index 35c5044..0000000
--- a/ansible/roles/baremetal-ctrl/meta/main.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
----
-dependencies:
-  - role: epel
-  - role: pkg_install
-  - role: nginx
-  - role: dnsmasq
diff --git a/ansible/roles/baremetal-ctrl/tasks/main.yaml b/ansible/roles/baremetal-ctrl/tasks/main.yaml
deleted file mode 100644
index ddab076..0000000
--- a/ansible/roles/baremetal-ctrl/tasks/main.yaml
+++ /dev/null
@@ -1,26 +0,0 @@
----
-- name: Create HTTP server directory
-  file:
-    path: "{{ http_server.root }}"
-    owner: root
-    group: root
-    mode: 0755
-    state: directory
-  tags: setup
-
-- name: Create PXE directory
-  file:
-    path: "{{ http_server.root }}/pxelinux.cfg"
-    owner: root
-    group: root
-    mode: 0755
-    state: directory
-  tags: setup
-
-# Serves kernel/initrd/pxe files via HTTP
-- name: Configure HTTP server
-  template:
-    src=bm-deploy.conf dest=/etc/nginx/conf.d/bm-deploy.conf
-    owner=root group=root mode=0644
-  notify: restart nginx
-  tags: setup
diff --git a/ansible/roles/baremetal-ctrl/templates/bm-deploy.conf b/ansible/roles/baremetal-ctrl/templates/bm-deploy.conf
deleted file mode 100644
index 752aa57..0000000
--- a/ansible/roles/baremetal-ctrl/templates/bm-deploy.conf
+++ /dev/null
@@ -1,12 +0,0 @@
-# /etc/nginx/conf.d/bm-deploy.conf
-#
-# Installed by Ansible on {{ansible_date_time.date}} at {{ansible_date_time.time}}.
-# WARNING: Any change to this file can be lost!
-
-server {
-    listen {{ http_server.port }};
-    root {{ http_server.root }};
-    location {{ http_server.root }} {
-        alias {{ http_server.root }};
-    }
-}
diff --git a/ansible/roles/deploy-baremetal/defaults/main.yaml b/ansible/roles/deploy-baremetal/defaults/main.yaml
deleted file mode 100644
index cbcc6ee..0000000
--- a/ansible/roles/deploy-baremetal/defaults/main.yaml
+++ /dev/null
@@ -1,5 +0,0 @@
----
-machine_dir: "{{ http_server.root }}/{{ baremetal.mac_address }}"
-deploy:
-  iso: /tmp/{{ baremetal.mac_address }}_deploy.iso
-  remove_iso: True
diff --git a/ansible/roles/deploy-baremetal/tasks/boot.yaml b/ansible/roles/deploy-baremetal/tasks/boot.yaml
deleted file mode 100644
index f6c79cc..0000000
--- a/ansible/roles/deploy-baremetal/tasks/boot.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
-- name: Power machine off (post-deploy)
-  ipmi_power:
-    name: "{{ ipmi.ip_address }}"
-    user: "{{ ipmi.user }}"
-    password: "{{ ipmi.password }}"
-    state: off
-    timeout: 600
-
-- name: Pause for some time while the service processor goes on standby
-  pause:
-    minutes: 1
-
-# machines booting via PXE expect configuration files in some standard
-# places like:
-#   pxelinux.cfg/01-ma-ca-dd-re-ss
-#   pxelinux.cfg/default
-# more info: http://jk.ozlabs.org/blog/post/158/netbooting-petitboot/
-- name: Create PXE configuration file (disk boot) for the target node
-  template:
-    src: pxeconfig-disk
-    dest: "{{ http_server.root }}/pxelinux.cfg/01-{{ baremetal.mac_address | regex_replace(':', '-') }}"
-    owner: root
-    group: root
-    mode: 0644
-
-- name: Power machine on (post-deploy)
-  ipmi_power:
-    name: "{{ ipmi.ip_address }}"
-    user: "{{ ipmi.user }}"
-    password: "{{ ipmi.password }}"
-    state: on
-    timeout: 600
-
-- name: Wait for OS to load
-  wait_for:
-    host: "{{ baremetal.ip_address }}"
-    port: 22
-    sleep: 120
-    timeout: 1200
diff --git a/ansible/roles/deploy-baremetal/tasks/cleanup.yaml b/ansible/roles/deploy-baremetal/tasks/cleanup.yaml
deleted file mode 100644
index 36d1bb7..0000000
--- a/ansible/roles/deploy-baremetal/tasks/cleanup.yaml
+++ /dev/null
@@ -1,27 +0,0 @@
----
-- name: Unmount ISO file
-  mount:
-    name: "{{ machine_dir }}/mnt"
-    state: unmounted
-
-- name: Remove ISO file
-  file:
-    path: "{{ deploy.iso }}"
-    state: absent
-  when: deploy.remove_iso
-
-- name: Remove DHCP parameter for target node
-  file:
-    path: /etc/dnsmasq.d/{{ baremetal.ip_address }}
-    state: absent
-  notify: restart dnsmasq
-
-- name: Remove PXE configuration file
-  file:
-    path: "{{ http_server.root }}/pxelinux.cfg/01-{{ baremetal.mac_address | regex_replace(':', '-') }}"
-    state: absent
-
-- name: Remove machine directory
-  file:
-    path: "{{ machine_dir }}"
-    state: absent
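Several tasks in this role detect a live system purely by the presence of its SSH server (`wait_for` on port 22 with a long timeout), as in the install tasks below. The equivalent check in plain Python, with a placeholder target address:

```
# Sketch of the wait_for tasks: poll port 22 until the SSH server
# of the (re)booted machine answers, or the timeout expires.
import socket
import time

def wait_for_ssh(host, port=22, sleep=120, timeout=1200):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), timeout=10).close()
            return True
        except socket.error:
            time.sleep(sleep)
    return False

print(wait_for_ssh("192.0.2.20"))  # placeholder target address
```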
diff --git a/ansible/roles/deploy-baremetal/tasks/install.yaml b/ansible/roles/deploy-baremetal/tasks/install.yaml
deleted file mode 100644
index a567dbf..0000000
--- a/ansible/roles/deploy-baremetal/tasks/install.yaml
+++ /dev/null
@@ -1,34 +0,0 @@
----
-- name: Set machine to boot from network
-  ipmi_boot:
-    name: "{{ ipmi.ip_address }}"
-    user: "{{ ipmi.user }}"
-    password: "{{ ipmi.password }}"
-    bootdev: network
-
-- name: Power machine off (pre-deploy)
-  ipmi_power:
-    name: "{{ ipmi.ip_address }}"
-    user: "{{ ipmi.user }}"
-    password: "{{ ipmi.password }}"
-    state: off
-    timeout: 600
-
-- name: Pause for some time while the service processor goes on standby
-  pause:
-    minutes: 1
-
-- name: Power machine on (pre-deploy)
-  ipmi_power:
-    name: "{{ ipmi.ip_address }}"
-    user: "{{ ipmi.user }}"
-    password: "{{ ipmi.password }}"
-    state: on
-    timeout: 600
-
-- name: Wait for installation
-  wait_for:
-    host: "{{ baremetal.ip_address }}"
-    port: 22
-    sleep: 120
-    timeout: 1200
diff --git a/ansible/roles/deploy-baremetal/tasks/main.yaml b/ansible/roles/deploy-baremetal/tasks/main.yaml
deleted file mode 100644
index 9e4ba81..0000000
--- a/ansible/roles/deploy-baremetal/tasks/main.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
----
-- block:
-    - name: Prepare controller node for deployment
-      include: pre_install.yaml
-      tags:
-        - deploy
-        - pre_install
-
-    - name: Install OS
-      include: install.yaml
-      tags:
-        - deploy
-        - install
-
-    - name: Run post-install tasks
-      include: post_install.yaml
-      tags:
-        - deploy
-        - post_install
-
-    - name: Boot into installed system
-      include: boot.yaml
-      tags:
-        - deploy
-        - boot
-  always:
-    - name: Cleanup after deployment
-      include: cleanup.yaml
-      tags: cleanup
diff --git a/ansible/roles/deploy-baremetal/tasks/post_install.yaml b/ansible/roles/deploy-baremetal/tasks/post_install.yaml
deleted file mode 100644
index de2975f..0000000
--- a/ansible/roles/deploy-baremetal/tasks/post_install.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-- name: Gather facts from baremetal host
-  setup:
-  delegate_to: "{{ baremetal.ip_address }}"
-
-- name: Get installed file system UUID
-  set_fact:
-    installed_filesystem_uuid: "{{ item.uuid }}"
-  when: item.mount == "/mnt/sysimage"
-  with_items: "{{ ansible_mounts }}"
-
-- name: Get installed kernel and initramfs names
-  set_fact:
-    installed_kernel: "vmlinuz-{{ ansible_kernel }}"
-    installed_initramfs: "initramfs-{{ ansible_kernel }}.img"
-
-- name: Reset facts gathered
-  setup:
-
-- name: Get controller node SSH public key
-  slurp:
-    src: "{{ ansible_user_dir }}/.ssh/id_rsa.pub"
-  register: ctrl_node_ssh_pub_key
-
-- name: Authorize controller node SSH key in baremetal node
-  authorized_key:
-    user: root
-    manage_dir: yes
-    state: present
-    key: "{{ ctrl_node_ssh_pub_key['content'] | b64decode }}"
-  delegate_to: "{{ baremetal.ip_address }}"
-
-- name: Copy kernel/initramfs to controller node
-  shell: >
-    scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
-    {{ baremetal.ip_address }}:/mnt/sysimage/boot/{{ item }}
-    {{ machine_dir }}/{{ item }}
-  with_items:
-    - "{{ installed_kernel }}"
-    - "{{ installed_initramfs }}"
-
-- name: Set initramfs permissions
-  file:
-    path: "{{ machine_dir }}/{{ installed_initramfs }}"
-    mode: 0644
-
-- name: Authorize playbook user SSH key in baremetal node
-  authorized_key:
-    user: root
-    manage_dir: yes
-    state: present
-    key: "{{ playbook_node_ssh_pub_key }}"
-    path: /mnt/sysimage/root/.ssh/authorized_keys
-  delegate_to: "{{ baremetal.ip_address }}"
diff --git a/ansible/roles/deploy-baremetal/tasks/pre_install.yaml b/ansible/roles/deploy-baremetal/tasks/pre_install.yaml
deleted file mode 100644
index faec7ac..0000000
--- a/ansible/roles/deploy-baremetal/tasks/pre_install.yaml
+++ /dev/null
@@ -1,61 +0,0 @@
-- name: Allow DHCP and HTTP services in the firewall
-  firewalld:
-    service={{item}} state=enabled
-    permanent=False immediate=True
-  with_items:
-    - http
-    - dhcp
-
-- name: Define DHCP parameters for the target node
-  template:
-    src: dhcp-host
-    dest: /etc/dnsmasq.d/{{ baremetal.ip_address }}
-    owner: root
-    group: root
-    mode: 0644
-
-# machines booting via PXE expect configuration files in some standard
-# places like:
-#   pxelinux.cfg/01-ma-ca-dd-re-ss
-#   pxelinux.cfg/default
-# more info: http://jk.ozlabs.org/blog/post/158/netbooting-petitboot/
-- name: Create PXE configuration file (network boot) for the target node
-  template:
-    src: pxeconfig-network
-    dest: "{{ http_server.root }}/pxelinux.cfg/01-{{ baremetal.mac_address | regex_replace(':', '-') }}"
-    owner: root
-    group: root
-    mode: 0644
-
-- name: Create machine-specific directory for deployment
-  file:
-    path: "{{ machine_dir }}"
-    owner: root
-    group: root
-    mode: 0755
-    state: directory
-
-- name: Get SSH public key
-  set_fact:
-    playbook_node_ssh_pub_key: "{{ lookup('file', ansible_user_dir + '/.ssh/id_rsa.pub') }}"
-
-- name: Make kickstart available in HTTP server
-  template:
-    src: kickstart
-    dest: "{{ machine_dir }}/install.ks"
-    owner: root
-    group: root
-    mode: 0644
-
-- name: Mount ISO file
-  mount:
-    name: "{{ machine_dir }}/mnt"
-    src: "{{ deploy.iso }}"
-    state: mounted
-    fstype: iso9660
-
-- name: Restart servers to apply configuration
-  service: name={{ item }} state=restarted
-  with_items:
-    - nginx
-    - dnsmasq
diff --git a/ansible/roles/deploy-baremetal/templates/dhcp-host b/ansible/roles/deploy-baremetal/templates/dhcp-host
deleted file mode 100644
index b754b1c..0000000
--- a/ansible/roles/deploy-baremetal/templates/dhcp-host
+++ /dev/null
@@ -1,4 +0,0 @@
-dhcp-range={{ baremetal.ip_address }},static,{{ ansible_default_ipv4.netmask }},infinite
-dhcp-host={{ baremetal.mac_address }},{{ baremetal.ip_address }}
-dhcp-option-force=210,http://{{ ansible_default_ipv4.address }}/
-dhcp-option=3,{{ ansible_default_ipv4.gateway }}
diff --git a/ansible/roles/deploy-baremetal/templates/kickstart b/ansible/roles/deploy-baremetal/templates/kickstart
deleted file mode 100644
index 2bcea05..0000000
--- a/ansible/roles/deploy-baremetal/templates/kickstart
+++ /dev/null
@@ -1,40 +0,0 @@
-# Created by Ansible on {{ansible_date_time.date}} at {{ansible_date_time.time}}.
-text
-
-timezone --utc {{ baremetal.timezone }}
-network --bootproto=dhcp
-rootpw --iscrypted nopassword
-
-ignoredisk --only-use=disk/by-id/{{ baremetal.disk_serial }}
-# bootloader --append="console=tty0" --location=mbr --timeout=1 --boot-drive=disk/by-id/{{ baremetal.disk_serial }}
-zerombr
-clearpart --all --initlabel
-autopart --type=lvm
-
-%packages
-openssh-server
-%end
-
-
-%post --nochroot --erroronfail
-
-# Setup SSH on the livecd so that ansible can gather facts about the
-# installation
-mkdir --mode=700 /root/.ssh/
-echo "{{ playbook_node_ssh_pub_key }}" > /root/.ssh/authorized_keys
-chmod 644 /root/.ssh/authorized_keys
-
-# Create SSH server host keys
-ssh-keygen -A
-
-# Allow network access so that ansible can copy the kernel/initramfs
-# out to the controller node.
-systemctl stop NetworkManager
-echo "nameserver {{ ansible_default_ipv4.gateway }}" >> /etc/resolv.conf
-
-/usr/sbin/sshd -D -f /etc/ssh/sshd_config.anaconda &
-
-# Lock password so that the system is only accessible via SSH keys
-passwd -d root
-passwd -l root
-%end
diff --git a/ansible/roles/deploy-baremetal/templates/pxeconfig-disk b/ansible/roles/deploy-baremetal/templates/pxeconfig-disk
deleted file mode 100644
index 8100b0d..0000000
--- a/ansible/roles/deploy-baremetal/templates/pxeconfig-disk
+++ /dev/null
@@ -1,6 +0,0 @@
-default Deployed Disk
-
-label Deployed Disk
-  kernel http://{{ ansible_default_ipv4.address }}/{{ baremetal.mac_address }}/{{ installed_kernel }}
-  initrd http://{{ ansible_default_ipv4.address }}/{{ baremetal.mac_address }}/{{ installed_initramfs }}
-  append root=UUID={{ installed_filesystem_uuid }} ro console=tty0 console=ttyS0 crashkernel=auto LANG=en_US.UTF-8
diff --git a/ansible/roles/deploy-baremetal/templates/pxeconfig-network b/ansible/roles/deploy-baremetal/templates/pxeconfig-network
deleted file mode 100644
index 3eec013..0000000
--- a/ansible/roles/deploy-baremetal/templates/pxeconfig-network
+++ /dev/null
@@ -1,6 +0,0 @@
-default Automated Deployment
-
-label Automated Deployment
-  kernel http://{{ ansible_default_ipv4.address }}/{{ baremetal.mac_address }}/mnt/ppc/ppc64/vmlinuz
-  initrd http://{{ ansible_default_ipv4.address }}/{{ baremetal.mac_address }}/mnt/ppc/ppc64/initrd.img
-  append ip=lan0:dhcp::{{ baremetal.mac_address }} BOOTIF={{ baremetal.mac_address }} root=live:http://{{ ansible_default_ipv4.address }}/{{ baremetal.mac_address }}/mnt/LiveOS/squashfs.img repo=http://{{ ansible_default_ipv4.address }}/{{ baremetal.mac_address }}/mnt/ inst.ks=http://{{ ansible_default_ipv4.address }}/{{ baremetal.mac_address }}/install.ks
diff --git a/ansible/vars-baremetal.yaml b/ansible/vars-baremetal.yaml
deleted file mode 100644
index 975307f..0000000
--- a/ansible/vars-baremetal.yaml
+++ /dev/null
@@ -1,9 +0,0 @@
----
-baremetal:
-  ip_address:
-  mac_address:
-  disk_serial:
-  timezone:
-ipmi:
-  user:
-  ip_address:
diff --git a/ansible/vars-bvt.yaml b/ansible/vars-bvt.yaml
deleted file mode 100644
index 9fbff3b..0000000
--- a/ansible/vars-bvt.yaml
+++ /dev/null
@@ -1,46 +0,0 @@
----
-user_name: jenkins-slave
-user_groups: ''
-user_home_dir: "/home/{{user_name}}"
-jenkins_home_dir: "{{user_home_dir}}"
-
-avocado_dir: "{{ansible_env.HOME}}/avocado"
-
-avocado_data_dir: "{{avocado_dir}}/data"
-avocado_iso_dir: "{{avocado_dir}}/isos"
-avocado_log_dir: "{{avocado_dir}}/job-results"
-avocado_test_dir: "{{avocado_dir}}/tests"
-
-avocado_conf_dir: "{{ansible_env.HOME}}/.config/avocado"
-avocado_conf: "{{avocado_conf_dir}}/avocado.conf"
-
-avocado_vt_repo: "https://github.com/open-power-host-os/avocado-vt.git"
-avocado_vt_branch: "master"
-avocado_vt_repo_dir: "{{ansible_env.HOME}}/bvt/avocado-vt"
-
-test_providers_dir: "{{avocado_data_dir}}/avocado-vt/test-providers.d"
-test_providers_downloads_dir: "{{test_providers_dir}}/downloads"
-
-avocado_dirs:
-  - "{{ansible_env.HOME}}"
-  - "{{avocado_dir}}"
-  - "{{avocado_data_dir}}"
-  - "{{avocado_iso_dir}}"
-  - "{{avocado_log_dir}}"
-  - "{{avocado_test_dir}}"
-  - "{{avocado_conf_dir}}"
-  - "{{avocado_vt_repo_dir}}"
-  - "{{test_providers_dir}}"
-  - "{{test_providers_downloads_dir}}"
-
-avocado_repos:
-  - { repo: "https://github.com/autotest/tp-libvirt.git",
-      dest: "{{test_providers_downloads_dir}}/io-github-autotest-libvirt" }
-  - { repo: "https://github.com/autotest/tp-qemu.git",
-      dest: "{{test_providers_downloads_dir}}/io-github-autotest-qemu" }
-  - { repo: "https://github.com/spiceqa/tp-spice.git",
-      dest: "{{test_providers_downloads_dir}}/io-github-spiceqa-spice" }
-  - { repo: "https://github.com/open-power-host-os/bvt.git",
-      dest: "{{test_providers_downloads_dir}}/host-os-bvt" }
-
-avocado_http_ports: 8000-8004/tcp
diff --git a/pipeline/daily/stages.groovy b/pipeline/daily/stages.groovy
index e6ab217..64846e5 100644
--- a/pipeline/daily/stages.groovy
+++ b/pipeline/daily/stages.groovy
@@ -186,36 +186,6 @@ python host_os.py \
     }
 }
 
-def runBVT() {
-    String PREVIOUS_YUM_REPO_FILE_URL =
-        "${RSYNC_URL_PREFIX}$params.UPLOAD_SERVER_PERIODIC_BUILDS_DIR_PATH/" +
-        "latest/hostos.repo"
-    String CURRENT_YUM_REPO_FILE_URL =
-        "${RSYNC_URL_PREFIX}$params.UPLOAD_SERVER_BUILDS_DIR_PATH/" +
-        "$buildStages.buildTimestamp/hostos.repo"
-
-    previousConfigParameter = ''
-    try {
-        utils.rsyncDownload(PREVIOUS_YUM_REPO_FILE_URL, 'previous_host_os.repo')
-        previousConfigParameter = '--previous-yum-config-file previous_host_os.repo'
-    } catch (hudson.AbortException exception) {
-        echo('Previous build not found, update test will be skipped.')
-    }
-
-    try {
-        utils.rsyncDownload(CURRENT_YUM_REPO_FILE_URL, 'current_host_os.repo')
-
-        sh """\
-sudo host-os-bvt \
-    --current-yum-config-file current_host_os.repo \
-    $previousConfigParameter \
-"""
-    } catch (Exception exception) {
-        echo('BVT execution failed')
-        currentBuild.result = 'UNSTABLE'
-    }
-}
-
 def commitToGitRepo() {
     String GITHUB_BOT_HTTP_URL =
         "ssh://git@github/$params.GITHUB_BOT_USER_NAME"
diff --git a/scripts/host-os-bvt.py b/scripts/host-os-bvt.py
deleted file mode 100755
index 8012fd4..0000000
--- a/scripts/host-os-bvt.py
+++ /dev/null
@@ -1,226 +0,0 @@
-#!/usr/bin/env python2
-
-import argparse
-import logging
-import shlex
-import shutil
-import subprocess
-
-AVOCADO_RUN_COMMAND = (
-    "/usr/bin/avocado run --vt-type {vt_type} --vt-guest-os {vt_guest_os} "
-    "--failfast on {tests}")
-# TODO: Parameterize those values
-VT_TYPE = "libvirt"
-VT_GUEST_OS = "CentOS.7.4"
-
-IMPORT_GUEST_TEST = ("io-github-autotest-qemu.unattended_install.import.import"
-                     ".default_install.aio_native")
-REMOVE_GUEST_TEST = "io-github-autotest-libvirt.remove_guest.without_disk"
-INSTALL_BASE_OS_TEST = ("io-github-autotest-qemu.unattended_install.url.http_ks"
-                        ".default_install.aio_native")
-UPDATE_BASE_OS_TEST = "io-github-autotest-qemu.yum_update"
-INSTALL_HOST_OS_TEST = "host-os-bvt.yum_install"
-UPDATE_HOST_OS_TEST = "host-os-bvt.yum_update"
-
-# TODO: Parameterize those values
-DISK_PATH = "/root/avocado/data/avocado-vt/images/centos74-ppc64le.qcow2"
-DISK_BACKUP_PATH = DISK_PATH + ".backup"
-
-LOGGING_FORMAT = "%(message)s"
-logging.basicConfig(level=logging.DEBUG, format=LOGGING_FORMAT)
-
-
-def parse_cli_options():
-    """
-    Parse CLI options
-
-    Returns:
-        Namespace: CLI options. Valid attributes: previous_yum_config_file,
-            current_yum_config_file.
-    """
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--current-yum-config-file",
-                        help="yum configuration file pointing to the current "
-                        "Host OS repository",
-                        default="host_os.repo")
-    parser.add_argument("--previous-yum-config-file",
-                        help="yum configuration file pointing to a previous "
-                        "Host OS repository",
-                        default=None)
-    args = parser.parse_args()
-    return args
-
-
-def get_vt_extra_params(parameter_values):
-    """
-    Get a string to be appended to the 'avocado run' command setting
-    extra parameters for avocado-vt.
-
-    Args:
-        parameter_values (dict): dictionary mapping parameter names to
-            their values
-    """
-    vt_extra_params = " --vt-extra-params"
-    for name, value in parameter_values.items():
-        vt_extra_params += " {name}={value}".format(**locals())
-    return vt_extra_params
-
-
-def execute_avocado_command(tests, vt_extra_parameter_values=None):
-    """
-    Execute the tests using the Avocado framework.
-
-    Args:
-        tests ([str]): name of the tests to be executed
-        vt_extra_parameter_values (dict): dictionary mapping avocado-vt
-            parameter names to their values
-    """
-    cmd = AVOCADO_RUN_COMMAND.format(
-        vt_type=VT_TYPE, vt_guest_os=VT_GUEST_OS, tests=" ".join(tests))
-    if vt_extra_parameter_values:
-        cmd += get_vt_extra_params(vt_extra_parameter_values)
-
-    logging.debug("Executing: " + cmd)
-    subprocess.check_call(shlex.split(cmd), stderr=subprocess.STDOUT)
-
-
-def update_guest():
-    """
-    Create a libvirt domain from an existing disk image and updates
-    the system using yum.
-    """
-    logging.info("Updating an existing guest")
-    tests = [IMPORT_GUEST_TEST, UPDATE_BASE_OS_TEST]
-    extra_params = dict(
-        restore_image_after_testing=False,
-        yum_update_timeout=1800)
-    execute_avocado_command(tests, extra_params)
-
-
-def install_guest():
-    """
-    Create a libvirt domain, installs the OS in an empty disk image and
-    updates the system using yum.
-    """
-    logging.info("Installing the OS and updating a new guest")
-    tests = [INSTALL_BASE_OS_TEST, UPDATE_BASE_OS_TEST]
-    extra_params = dict(
-        restore_image_after_testing=False,
-        install_timeout=9000,
-        yum_update_timeout=1800)
-    execute_avocado_command(tests, extra_params)
-
-
-def install_host_os(yum_config_file_path):
-    """
-    Execute the test that installs the Host OS packages on top of a
-    working system.
-
-    Args:
-        yum_config_file_path (str): path to a yum configuration file
-            pointing to a repository containing Host OS packages
-    """
-    logging.info("Installing Host OS packages")
-    tests = [INSTALL_HOST_OS_TEST]
-    extra_params = dict(
-        restore_image_after_testing=False,
-        yum_install_timeout=1800,
-        yum_config_file_path=yum_config_file_path)
-    execute_avocado_command(tests, extra_params)
-
-
-def update_host_os(yum_install_config_file_path, yum_update_config_file_path):
-    """
-    Execute the test that installs the Host OS packages on top of a
-    working system and updates them to a more recent version.
-
-    Args:
-        yum_install_config_file_path (str): path to a yum configuration
-            file pointing to a repository containing outdated Host OS
-            packages
-        yum_update_config_file_path (str): path to a yum configuration
-            file pointing to a repository containing updated Host OS
-            packages
-    """
-    logging.info("Updating a system with Host OS")
-    tests = [UPDATE_HOST_OS_TEST]
-    extra_params = dict(
-        restore_image_after_testing=False,
-        yum_install_timeout=1800,
-        yum_install_config_file_path=yum_install_config_file_path,
-        yum_update_timeout=1200,
-        yum_update_config_file_path=yum_update_config_file_path)
-    execute_avocado_command(tests, extra_params)
-
-
-def remove_guest():
-    """
-    Remove a libvirt domain, leaving its disk image as is.
-    """
-    logging.info("Removing the guest")
-    tests = [REMOVE_GUEST_TEST]
-    execute_avocado_command(tests)
-
-
-def backup_disk_image():
-    """
-    Back up the disk image to a predefined path expected by avocado-vt.
-    """
-    logging.info("Backing up disk image")
-    logging.debug("Copying from '{source}' to '{dest}'".format(
-        source=DISK_PATH, dest=DISK_BACKUP_PATH))
-    shutil.copy(DISK_PATH, DISK_BACKUP_PATH)
-
-
-def restore_disk_image():
-    """
-    Restore a previously backed up disk image.
-    """
-    logging.info("Restoring disk backup")
-    logging.debug("Copying from '{source}' to '{dest}'".format(
-        source=DISK_BACKUP_PATH, dest=DISK_PATH))
-    shutil.copy(DISK_BACKUP_PATH, DISK_PATH)
-
-
-def execute_bvt(current_yum_config_file, previous_yum_config_file=None):
-    """
-    Prepare a guest with the base OS and execute the Host OS install and
-    update tests on top of it.
-
-    Args:
-        current_yum_config_file (str): path to a yum configuration
-            file pointing to a repository containing updated Host OS
-            packages
-        previous_yum_config_file (str): path to a yum configuration
-            file pointing to a repository containing outdated Host OS
-            packages
-    """
-    try:
-        try:
-            update_guest()
-        except subprocess.CalledProcessError as error:
-            logging.warning(error)
-            try:
-                remove_guest()
-            except subprocess.CalledProcessError as remove_error:
-                logging.debug(remove_error)
-            install_guest()
-        backup_disk_image()
-        install_host_os(current_yum_config_file)
-        restore_disk_image()
-        if previous_yum_config_file:
-            update_host_os(previous_yum_config_file, current_yum_config_file)
-            restore_disk_image()
-        else:
-            logging.info(
-                "No previous repository provided, skipping update test")
-    finally:
-        remove_guest()
-
-
-if __name__ == '__main__':
-    args = parse_cli_options()
-    execute_bvt(args.current_yum_config_file, args.previous_yum_config_file)