Importing the changes made to be able to deploy MN. #44

Draft — wants to merge 102 commits into base: main

Commits
645e3a2
Additions to enable Azimuth appliance support.
MaxBed4d Jan 3, 2024
5b8bd80
Small typo amendment.
MaxBed4d Jan 3, 2024
18e5d28
Updated usage template for multinode appliance.
MaxBed4d Jan 4, 2024
7bb2cd9
Added 'roles' path to ansible.cfg.
MaxBed4d Jan 4, 2024
e97a5bd
Moved ansible.cfg to repo root directory.
MaxBed4d Jan 4, 2024
d4f9786
Added a requirements yaml file to repo root.
MaxBed4d Jan 4, 2024
0fec8b3
Roles directory typo fix in config.
MaxBed4d Jan 4, 2024
644d46f
Changed ansible collection version.
MaxBed4d Jan 4, 2024
93a2a2f
Added symlink of requirements.
MaxBed4d Jan 4, 2024
b15ebf1
Deleted link to create one after.
MaxBed4d Jan 4, 2024
0921854
Symlink added.
MaxBed4d Jan 4, 2024
fd1e533
Added Group Vars to maybe fix the inability to find roles.
MaxBed4d Jan 5, 2024
baa2521
Change roles directory name.
MaxBed4d Jan 5, 2024
2f36b84
Added templates so that gateway_ips can be provided.
MaxBed4d Jan 5, 2024
c29c28c
Changed few tf files and 'template' to submit correct data.
MaxBed4d Jan 5, 2024
63c8f54
Trying to copy all tf files to the appropriate location.
MaxBed4d Jan 5, 2024
4351735
Added TF Variables to use to build.
MaxBed4d Jan 5, 2024
6a0f55e
Check current playbook directory.
MaxBed4d Jan 5, 2024
bacaa2f
remove versions.tf
MaxBed4d Jan 5, 2024
f72a422
Edited backend type and vars.
MaxBed4d Jan 5, 2024
7403414
Locate playbook directory.
MaxBed4d Jan 5, 2024
f666fed
Change directory for project.
MaxBed4d Jan 5, 2024
1e1bdbe
Remove backend type.
MaxBed4d Jan 5, 2024
e6969d2
Typo fix.
MaxBed4d Jan 5, 2024
73e4f9e
Change tfvars to j2 temp.
MaxBed4d Jan 5, 2024
d834bb6
SSH gen alternative method.
MaxBed4d Jan 5, 2024
6a3cea8
SSH alt method 2.
MaxBed4d Jan 5, 2024
608881f
Update cluster_gateway_ip output variable.
MaxBed4d Jan 8, 2024
b4096ed
Include cluster_nodes variable in output.
MaxBed4d Jan 8, 2024
4fa6cbb
Remove backend from cluster_nodes variable.
MaxBed4d Jan 8, 2024
d934190
Amend variable call.
MaxBed4d Jan 8, 2024
3d3324e
Changed variables being provided to cluster-nodes.
MaxBed4d Jan 8, 2024
6f921ca
Test change to cluster_nodes variable name.
MaxBed4d Jan 8, 2024
09be066
Remove cluster_nodes concat var.
MaxBed4d Jan 8, 2024
ac40ab6
Formatting amendment.
MaxBed4d Jan 8, 2024
ff3f864
Concat the list of cluster_nodes.
MaxBed4d Jan 8, 2024
a75a201
Alter cluster_nodes variables.
MaxBed4d Jan 8, 2024
f1c84f9
create join list to save a loop.
MaxBed4d Jan 8, 2024
5514ce2
amend typo
MaxBed4d Jan 8, 2024
ebd2ae4
Change 'join' formatting.
MaxBed4d Jan 8, 2024
a3bb3e4
Created for loop for cluster_nodes definition.
MaxBed4d Jan 8, 2024
a1585d5
removed fact for autherisation.
MaxBed4d Jan 8, 2024
fbb5968
Remove index notation for IP.
MaxBed4d Jan 8, 2024
ca48124
Changed backend type to a variable.
MaxBed4d Jan 8, 2024
acd6d1c
Added azimuth ssh key.
MaxBed4d Jan 8, 2024
0fa648b
Commented out ssh key gen.
MaxBed4d Jan 8, 2024
a6b6306
Change from deploy to user key.
MaxBed4d Jan 8, 2024
76672b8
Set ssh deploy key to be equal to the user ssh key.
MaxBed4d Jan 8, 2024
c087cb7
Pass multiple ssh keys.
MaxBed4d Jan 8, 2024
1de8778
Amend comment to be able to delete instance.
MaxBed4d Jan 8, 2024
8248634
Converted userdata into a template for ssh keys.
MaxBed4d Jan 8, 2024
5aea5ab
Amend directory typo.
MaxBed4d Jan 8, 2024
5bdbcdb
Comment out ssh key copy.
MaxBed4d Jan 8, 2024
0094547
Create and add ansible ssh key so it can run in runner.
MaxBed4d Jan 9, 2024
021d3eb
Correct variable output.
MaxBed4d Jan 9, 2024
4c7f736
Configure the inventory and install ansible galaxy.
MaxBed4d Jan 9, 2024
06fdbd7
Run command through localhost.
MaxBed4d Jan 9, 2024
3657300
Merge requirements.
MaxBed4d Jan 9, 2024
6a1c393
Move ssh var key definition to main playbook.
MaxBed4d Jan 9, 2024
68159d2
Edit and remove nested template expressions.
MaxBed4d Jan 9, 2024
dd8874c
Make ssh variables for all hosts.
MaxBed4d Jan 9, 2024
e0c9720
SSH Key setup for Multinode Ansible.
MaxBed4d Jan 9, 2024
b15fccb
Variable removal amendment.
MaxBed4d Jan 9, 2024
4f89a78
Changed MN flavour and ssh user username.
MaxBed4d Jan 9, 2024
c0f733a
Link some variables back to the previous directory.
MaxBed4d Jan 9, 2024
ea11995
Fix symlink
MaxBed4d Jan 9, 2024
4fab3af
Remove symlinks.
MaxBed4d Jan 9, 2024
a3e7b2d
add ansible_user to vars.
MaxBed4d Jan 9, 2024
57756dd
Variable set with quote marks.
MaxBed4d Jan 9, 2024
bde16cd
Giving a host to playbook.
MaxBed4d Jan 9, 2024
ab7e903
Create block for tasks.
MaxBed4d Jan 9, 2024
c8ac249
Comment out task test.
MaxBed4d Jan 9, 2024
b0f2932
Debug Groups variable.
MaxBed4d Jan 9, 2024
98d6872
Test new group structure.
MaxBed4d Jan 9, 2024
17c0ca2
Tupple list amend.
MaxBed4d Jan 9, 2024
69846b4
Add command line playbook deployment.
MaxBed4d Jan 9, 2024
1dce1b5
Amend indentations
MaxBed4d Jan 9, 2024
879e67d
Provide Terraform Vars for playbook.
MaxBed4d Jan 9, 2024
d9d60bd
Changed output and converted resources into cluster_nodes output.
MaxBed4d Jan 10, 2024
8dc1d58
Amend playbook vars.
MaxBed4d Jan 10, 2024
fa6fffc
Change to import playbook.
MaxBed4d Jan 10, 2024
c8ba8d0
Install ansible.posix
MaxBed4d Jan 10, 2024
c0dff50
Amended playbook for installing ansible galaxy requirements.
MaxBed4d Jan 10, 2024
ceabfe2
Remove Ansible-galaxy install as it should be done by the requirements.
MaxBed4d Jan 10, 2024
a40845e
This is a combination of 5 commits.
MaxBed4d Jan 10, 2024
5a7c94b
No Wazuh deploy.
MaxBed4d Jan 17, 2024
4519eab
Create infrastructure only option.
MaxBed4d Jan 18, 2024
232ae60
Checkout the main ansible folder so that these changes are solely foc…
MaxBed4d Jan 18, 2024
6fa2086
Create a second App UI to deploy just the infrastructure as a test.
MaxBed4d Jan 19, 2024
00ab6fb
Update meta UI for Infrastructure deployment.
MaxBed4d Jan 22, 2024
6bedaf8
Update UI to allow Openstack version select.
MaxBed4d Jan 23, 2024
775b7f1
Try to allow custom input.
MaxBed4d Jan 24, 2024
305fc34
UI Changes
MaxBed4d Jan 24, 2024
cee87a5
Given user choice over image.
MaxBed4d Jan 24, 2024
1cddc5f
Change the way the ssh command is provided.
MaxBed4d Jan 24, 2024
acf6248
set ssh user after automatically.
MaxBed4d Jan 24, 2024
f987bd8
Change name of app.yaml
MaxBed4d Jan 24, 2024
257fdce
Fix ssh user declaration.
MaxBed4d Jan 24, 2024
a8bec13
Improved UI with ssh username input.
MaxBed4d Jan 25, 2024
6b8eb6f
Change description pipe symbol to have multiline outputs.
MaxBed4d Jan 26, 2024
b257275
Tidy up of the code for the infrastructure deployment.
MaxBed4d Jan 26, 2024
f649ba7
Importing the changes made to be able to deploy MN.
MaxBed4d Jan 26, 2024
11 changes: 11 additions & 0 deletions ansible.cfg
@@ -0,0 +1,11 @@
[defaults]
stdout_callback = yaml
callbacks_enabled = timer, profile_tasks, profile_roles
host_key_checking = False
pipelining = True
forks = 30
deprecation_warnings=False
roles_path = roles

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
1 change: 1 addition & 0 deletions ansible/ansible.cfg
@@ -5,6 +5,7 @@ host_key_checking = False
pipelining = True
forks = 30
deprecation_warnings=False
roles_path = ../roles

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
4 changes: 2 additions & 2 deletions ansible/fix-homedir-ownership.yml
@@ -1,6 +1,6 @@
---
- name: Fix Home Directory Ownership
hosts: all
hosts: all, !localhost
gather_facts: false
vars:
# At the time of running this playbook the home directory is not owned by the user.
@@ -16,7 +16,7 @@
gather_subset:
- user_dir

- name: Ensure homedir is owned by {{ ansible_user }}
- name: Ensure homedir is owned by "{{ ansible_user }}"
ansible.builtin.file:
dest: "{{ ansible_env.HOME }}"
state: directory
8 changes: 3 additions & 5 deletions ansible/vars/defaults.yml
@@ -2,12 +2,12 @@
src_directory: "{{ ansible_env.HOME }}/src"

kayobe_config_repo: https://github.com/stackhpc/stackhpc-kayobe-config.git
kayobe_config_version: stackhpc/yoga
kayobe_config_version: "{{ kayobe_config_branch | default('stackhpc/yoga')}}"
kayobe_config_name: kayobe-config
kayobe_config_environment: ci-multinode

kayobe_repo: https://github.com/stackhpc/kayobe.git
kayobe_version: stackhpc/yoga
kayobe_version: "{{ kayobe_version_branch | default('stackhpc/yoga') }}"
kayobe_name: kayobe

openstack_config_repo: https://github.com/stackhpc/openstack-config-multinode
@@ -16,9 +16,7 @@ openstack_config_name: openstack-config

vault_password_path: "~/vault.password"

ssh_key_path:

vxlan_vni:
ssh_key_path: "{{ cluster_ssh_private_key_file }}"

root_domain: sms-lab.cloud

2 changes: 1 addition & 1 deletion authentication.tf
@@ -1,4 +1,4 @@
resource "openstack_compute_keypair_v2" "keypair" {
name = var.multinode_keypair
public_key = file(var.ssh_public_key)
public_key = var.ssh_public_key
}
26 changes: 26 additions & 0 deletions group_vars/openstack.yml
@@ -0,0 +1,26 @@
# The default Terraform state key for backends that support it
terraform_state_key: "cluster/{{ cluster_id }}/tfstate"

# Set up the terraform backend
# This setup allows us to use the Consul backend when enabled without any changes
#terraform_backend_type: 'local'
terraform_backend_type: "{{ 'consul' if 'CONSUL_HTTP_ADDR' in ansible_env else 'local' }}"
terraform_backend_config_defaults:
consul:
path: "{{ terraform_state_key }}"
gzip: "true"
local: {}
terraform_backend_config: "{{ terraform_backend_config_defaults[terraform_backend_type] }}"

# These variables control the location of the Terraform binary
terraform_binary_directory: "{{ playbook_dir }}/bin"
terraform_binary_path: "{{ terraform_binary_directory }}/terraform"

# This controls the location where the Terraform files are rendered
terraform_project_path: "{{ playbook_dir }}"

# Indicates whether the Terraform operation is reconciling or removing resources
# Valid values are 'present' and 'absent'
terraform_state: "{{ cluster_state | default('present') }}"

cluster_ssh_user: "{{ ssh_user }}"
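The Consul-or-local switch above can be sketched in plain Python (illustrative only — the function name and the sample state key are hypothetical; the logic mirrors the Jinja expressions in the group_vars):

```python
def select_backend(env, state_key="cluster/demo/tfstate"):
    # Mirror of the group_vars logic: use the Consul backend when
    # CONSUL_HTTP_ADDR is present in the environment, else local state.
    backend_type = "consul" if "CONSUL_HTTP_ADDR" in env else "local"
    defaults = {
        "consul": {"path": state_key, "gzip": "true"},
        "local": {},
    }
    return backend_type, defaults[backend_type]

print(select_backend({}))                                   # → ('local', {})
print(select_backend({"CONSUL_HTTP_ADDR": "consul:8500"}))  # → ('consul', {...})
```

This is why the playbook needs no changes to switch backends: exporting `CONSUL_HTTP_ADDR` in the environment is the only toggle.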
66 changes: 66 additions & 0 deletions multinode-app.yml
@@ -0,0 +1,66 @@
---

- hosts: localhost
tasks:
- name: Show Playbook Directory
debug:
msg: "{{ playbook_dir }}"

- name: Template Terraform files into project directory
template:
src: terraform.tfvars.j2
dest: "{{ playbook_dir }}/terraform.tfvars"

- name: Template Terraform userdata.cfg.tpl files into project template directory
template:
src: "{{ playbook_dir }}/templates/userdata.cfg.tpl.j2"
dest: "{{ playbook_dir }}/templates/userdata.cfg.tpl"


# Provision the infrastructure

# The CaaS puts hosts for accessing the OpenStack
# API into the 'openstack' group
- hosts: openstack
roles:
- cluster_infra

- hosts: localhost
tasks:
# Check whether an ans_vlt_pwd variable is defined and, if so, save it into a
# file called '~/vault.password'. If it is not defined, create the
# '~/vault.password' file with the default password 'default_password'.
- name: Create vault password file
vars:
ans_dflt: 'default_password'
ansible.builtin.copy:
content: "{{ ans_vlt_pwd | default( ans_dflt , true ) }}"
dest: "~/vault.password"
mode: 0600

# If openstack_deploy is true then continue; otherwise end the playbook.

# Import the playbook to start configuring the multi-node hosts.
- name: Configure hosts and deploy ansible
import_playbook: ansible/configure-hosts.yml
when: openstack_deploy == true


- hosts: ansible_control
vars:
ansible_pipelining: true
ansible_ssh_pipelining: true
tasks:
- name: Deploy OpenStack.
ansible.builtin.command:
cmd: "bash ~/deploy-openstack.sh"
when: openstack_deploy == true

# This retrieves the IP of the ansible control host.
- hosts: localhost
tasks:
- debug: var=outputs
vars:
outputs:
cluster_access_ip: "{{ hostvars[groups['openstack'][0]].cluster_gateway_ip }}"
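The vault-password task above relies on Jinja's two-argument `default(ans_dflt, true)`, which falls back when the variable is undefined *or* falsy (e.g. an empty string passed on the CLI). A rough Python equivalent of that filter behaviour (helper name is hypothetical):

```python
def vault_password(ans_vlt_pwd=None, ans_dflt="default_password"):
    # Jinja's default(ans_dflt, true): fall back when the value is
    # undefined OR falsy, not only when it is undefined.
    return ans_vlt_pwd if ans_vlt_pwd else ans_dflt

print(vault_password())          # → default_password
print(vault_password(""))        # → default_password
print(vault_password("s3cret"))  # → s3cret
```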
118 changes: 89 additions & 29 deletions outputs.tf
@@ -2,6 +2,10 @@ output "ansible_control_access_ip_v4" {
value = openstack_compute_instance_v2.ansible_control.access_ip_v4
}

output "cluster_gateway_ip" {
value = openstack_compute_instance_v2.ansible_control.access_ip_v4
}

output "seed_access_ip_v4" {
value = openstack_compute_instance_v2.seed.access_ip_v4
}
@@ -75,38 +79,94 @@ resource "local_file" "deploy_openstack" {
file_permission = "0755"
}

resource "ansible_host" "control_host" {
name = openstack_compute_instance_v2.ansible_control.access_ip_v4
groups = ["ansible_control"]
output "cluster_nodes" {
description = "A list of the cluster nodes and their IP addresses which will be used by the Ansible inventory"
value = concat(
[
{
name = openstack_compute_instance_v2.ansible_control.name
ip = openstack_compute_instance_v2.ansible_control.access_ip_v4
groups = ["ansible_control"]
variables = {
ansible_user = var.ssh_user
}
}
],
flatten([
for node in openstack_compute_instance_v2.compute: {
name = node.name
ip = node.access_ip_v4
groups = ["compute"]
variables = {
ansible_user = var.ssh_user
}
}
]),
flatten([
for node in openstack_compute_instance_v2.controller: {
name = node.name
ip = node.access_ip_v4
groups = ["controllers"]
variables = {
ansible_user = var.ssh_user
}
}
]),
[{
name = openstack_compute_instance_v2.seed.name
ip = openstack_compute_instance_v2.seed.access_ip_v4
groups = ["seed"]
variables = {
ansible_user = var.ssh_user
}
}],
flatten([
for node in openstack_compute_instance_v2.storage: {
name = node.name
ip = node.access_ip_v4
groups = ["storage"]
variables = {
ansible_user = var.ssh_user
}
}
])
)
}
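The `cluster_nodes` output above is a flat list of `{name, ip, groups, variables}` objects. A hypothetical consumer that turns this shape into an INI-style Ansible inventory (the sample data below is illustrative, not real Terraform output):

```python
# Sample data mirroring the shape of the cluster_nodes output.
cluster_nodes = [
    {"name": "ansible-control", "ip": "10.0.0.10",
     "groups": ["ansible_control"], "variables": {"ansible_user": "cloud-user"}},
    {"name": "compute-0", "ip": "10.0.0.20",
     "groups": ["compute"], "variables": {"ansible_user": "cloud-user"}},
]

def to_inventory(nodes):
    # Bucket each node's host line under every group it belongs to.
    groups = {}
    for node in nodes:
        line = "{} ansible_host={} ansible_user={}".format(
            node["name"], node["ip"], node["variables"]["ansible_user"])
        for group in node["groups"]:
            groups.setdefault(group, []).append(line)
    return "\n".join(
        "[{}]\n{}".format(g, "\n".join(lines)) for g, lines in groups.items())

print(to_inventory(cluster_nodes))
```

Emitting the hosts as a structured output like this (rather than individual `ansible_host` resources) is what lets the inventory be built in one place downstream.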

resource "ansible_host" "compute_host" {
for_each = { for host in openstack_compute_instance_v2.compute : host.name => host.access_ip_v4 }
name = each.value
groups = ["compute"]
}
# Template of all the hosts' configuration which can be used to generate Ansible variables.

resource "ansible_host" "controllers_hosts" {
for_each = { for host in openstack_compute_instance_v2.controller : host.name => host.access_ip_v4 }
name = each.value
groups = ["controllers"]
}
# resource "ansible_host" "control_host" {
# name = openstack_compute_instance_v2.ansible_control.access_ip_v4
# groups = ["ansible_control"]
# }

resource "ansible_host" "seed_host" {
name = openstack_compute_instance_v2.seed.access_ip_v4
groups = ["seed"]
}
# resource "ansible_host" "compute_host" {
# for_each = { for host in openstack_compute_instance_v2.compute : host.name => host.access_ip_v4 }
# name = each.value
# groups = ["compute"]
# }

resource "ansible_host" "storage" {
for_each = { for host in openstack_compute_instance_v2.storage : host.name => host.access_ip_v4 }
name = each.value
groups = ["storage"]
}
# resource "ansible_host" "controllers_hosts" {
# for_each = { for host in openstack_compute_instance_v2.controller : host.name => host.access_ip_v4 }
# name = each.value
# groups = ["controllers"]
# }

resource "ansible_group" "cluster_group" {
name = "cluster"
children = ["compute", "ansible_control", "controllers", "seed", "storage"]
variables = {
ansible_user = var.ssh_user
}
}
# resource "ansible_host" "seed_host" {
# name = openstack_compute_instance_v2.seed.access_ip_v4
# groups = ["seed"]
# }

# resource "ansible_host" "storage" {
# for_each = { for host in openstack_compute_instance_v2.storage : host.name => host.access_ip_v4 }
# name = each.value
# groups = ["storage"]
# }

# resource "ansible_group" "cluster_group" {
# name = "cluster"
# children = ["compute", "ansible_control", "controllers", "seed", "storage"]
# variables = {
# ansible_user = var.ssh_user
# }
# }
9 changes: 9 additions & 0 deletions requirements.yml
@@ -0,0 +1,9 @@
---
collections:
- name: https://github.com/stackhpc/ansible-collection-terraform
type: git
version: 8c7acce4538aab8c0e928972155a2ccb5cb1b2a1
- name: cloud.terraform
- name: ansible.posix
roles:
- src: mrlesmithjr.manage_lvm
42 changes: 42 additions & 0 deletions roles/cluster_infra/tasks/main.yml
@@ -0,0 +1,42 @@
---

- name: Install Terraform binary
include_role:
name: stackhpc.terraform.install

- name: Make Terraform project directory
file:
path: "{{ terraform_project_path }}"
state: directory

- name: Write backend configuration
copy:
content: |
terraform {
backend "{{ terraform_backend_type }}" { }
}
dest: "{{ terraform_project_path }}/backend.tf"

# Patching in this appliance is implemented as a switch to a new base image
# So unless explicitly patching, we want to use the same image as last time
# To do this, we query the previous Terraform state before updating
- block:
- name: Get previous Terraform state
stackhpc.terraform.terraform_output:
binary_path: "{{ terraform_binary_path }}"
project_path: "{{ terraform_project_path }}"
backend_config: "{{ terraform_backend_config }}"
register: cluster_infra_terraform_output

- name: Extract image from Terraform state
set_fact:
cluster_previous_image: "{{ cluster_infra_terraform_output.outputs.cluster_image.value }}"
when: '"cluster_image" in cluster_infra_terraform_output.outputs'
when:
- terraform_state == "present"
- cluster_upgrade_system_packages is not defined or not cluster_upgrade_system_packages


- name: Provision infrastructure
include_role:
name: stackhpc.terraform.infra
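The state-querying block above implements a simple rule: unless the caller explicitly requests a system-package upgrade, reuse the image recorded in the previous Terraform state. A sketch of that decision (the helper name is hypothetical):

```python
def choose_image(requested, previous=None, upgrade=False):
    # Reuse the previously deployed image unless explicitly patching
    # (cluster_upgrade_system_packages) or no prior state exists.
    if not upgrade and previous is not None:
        return previous
    return requested

print(choose_image("rocky-9-v2", previous="rocky-9-v1"))                # → rocky-9-v1
print(choose_image("rocky-9-v2", previous="rocky-9-v1", upgrade=True))  # → rocky-9-v2
print(choose_image("rocky-9-v2"))                                       # → rocky-9-v2
```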
1 change: 1 addition & 0 deletions roles/requirements.yml
2 changes: 1 addition & 1 deletion templates/deploy-openstack.tpl
@@ -131,7 +131,7 @@ fi
if [[ "$(sudo docker image ls)" == *"kayobe"* ]]; then
echo "Image already exists skipping docker build"
else
sudo DOCKER_BUILDKIT=1 docker build --network host --build-arg BASE_IMAGE=$$BASE_IMAGE --file $${config_directories[kayobe]}/.automation/docker/kayobe/Dockerfile --tag kayobe:latest $${config_directories[kayobe]}
sudo DOCKER_BUILDKIT=1 docker build --network host --build-arg BASE_IMAGE=$BASE_IMAGE --file $${config_directories[kayobe]}/.automation/docker/kayobe/Dockerfile --tag kayobe:latest $${config_directories[kayobe]}
fi

set +x
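The `$$BASE_IMAGE` → `$BASE_IMAGE` fix above matters because Terraform's template engine only treats `${` as interpolation: `$${` is the escape for a literal `${`, while `$$` followed by anything else passes through unchanged and reaches the shell as-is (where `$$` expands to the shell's PID). A small Python sketch of the escape rule (not the real renderer):

```python
def render_escapes(template: str) -> str:
    # Sketch of Terraform's template escape rule: '$${' becomes a literal
    # '${'; '$' or '$$' not followed by '{' passes through unchanged.
    return template.replace("$${", "${")

print(render_escapes("$${config_directories[kayobe]}"))  # → ${config_directories[kayobe]}
print(render_escapes("$$BASE_IMAGE"))                    # → $$BASE_IMAGE (shell: PID + 'BASE_IMAGE')
print(render_escapes("$BASE_IMAGE"))                     # → $BASE_IMAGE
```

So the braced lookups keep their `$$` escapes, while the plain shell variable needs none.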
@@ -7,3 +7,6 @@ packages:
- git
- vim
- tmux
ssh_authorized_keys:
- "{{ cluster_deploy_ssh_public_key }}"
- "{{ cluster_user_ssh_public_key }}"