ansible-polytech-2024
This repository will serve as the support for the ASR practical work session in January. You should probably work in this directory and add code to it; that is going to be easier than creating a new ansible repository from scratch. You are also welcome to commit to this repository to checkpoint your work, as well as push it to any repository that polytech gives you access to.
The goal here is to deploy a Synapse Matrix server, an open-source implementation of the decentralised Matrix messaging protocol, using industry-standard tools such as ansible and docker.
By the end of this lab you'll be able to message each other in an end-to-end encrypted fashion using your own servers!
How it's going to work
- Log into https://gitea.plil.fr with your school credentials
- Then fork this repository (you need to do that only once per group)
- Modify the repo's code as you see fit and remember to commit and push often. This is how Xavier and I are going to assess progress and be able to grade your work.
Optionally, I would recommend adding this repository as a remote so you can pull fixes, should I have to push emergency fixes to the instructions (it happens more or less every year):
$ git remote add upstream https://git.plil.fr/tmaurice/polytech-ansible
# If you want to get the latest code merged with your branch
$ git pull upstream master
⚠️ Please use explicit and descriptive commit messages when committing to git.
Optional setup: a gitea runner for continuous integration
You can, if you so wish, deploy a gitea runner somewhere, preferably on your zabeth. I would recommend you set the runner up for your user, and enable the "actions" in the settings of your newly created repo.
This way, every time you push your code, the checks in .gitea/workflows will be run and you will have a live preview of the state of your code (how the linting is doing, what the code quality is, etc.).
If the consensus is "fuck that ain't nobody got time for that" I can also deploy one from my own server somewhere and you'll be able to use the runner for the entire length of the lab.
⚠️ you can run your gitea actions locally by running make action after installing the act binary
The default provided CI pipelines only check for YAML and Ansible correctness, but you could imagine workflows where after these tests are run and are successful, we trigger an ansible run directly from the CI and the runner itself. Feel free to try :)
Setup
⚠️ This setup has to be done on your Zabeth host.
You will need to install a few things to get started, buckle up.
Firstly you need to export some environment variables in your working terminal to be able to talk to the outside world using the school's proxy:
export https_proxy=http://proxy.polytech-lille.fr:3128
Also, depending on the operating system installed on your virtual machines (Ubuntu Server should not have the issue, Debian might), you may need to install Python 3:
# apt install python3
The virtualenv
Since Ansible is written in Python and we don't want to install it system-wide, you will need to create a virtual environment. Virtualenvs let you install Python packages without making them available system-wide; we are doing this to avoid polluting your lab machine with things that won't be used after today.
To create the virtualenv you need to run the following:
$ python3 -m venv ~/.ansible-venv
# then you want to "activate" the venv, you will need to do this for every new terminal you open
$ . ~/.ansible-venv/bin/activate
⚠️ you need to source ~/.ansible-venv/bin/activate every time you open a new terminal and want to use ansible in it, otherwise it just won't work because the ansible binary won't be found.
Install ansible
Install ansible via pip after entering the virtualenv
$ pip install -r requirements.txt
At this point you should have ansible installed.
Install the docker role
Install the docker role using ansible galaxy (ansible galaxy is a sort of package manager for ansible).
$ ansible-galaxy install -r requirements.yml
At this point you should be good to go!
Generate an SSH key if you don't have one already
You'll need an SSH key if you don't have one already. If ssh-keygen complains about the key already existing, just reuse the existing key.
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -P ''
⚠️ In real life, don't use -P '' because it creates your SSH key without a passphrase; it is OK for this lab, not for real life.
⚠️ Never, ever, EVER commit an unencrypted SSH key to a git repository. If you need to add the key to the repository for some reason, I would recommend you look up something like sops to store an encrypted version of the secret in the repository.
Lastly, open group_vars/all.yml, go to the end of the file and add the public key you created to root_user.default_root_keys (from ~/.ssh/id_ed25519.pub, or any other key you created beforehand).
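For example (the exact layout of the file may differ slightly; the key below is a placeholder, paste the contents of your own ~/.ssh/id_ed25519.pub instead), the end of group_vars/all.yml could look like:

```yaml
# group_vars/all.yml (excerpt) -- placeholder key, use your own
root_user:
  default_root_keys:
    - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... you@zabeth"
```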
One more thing, update your inventory
You can now update your inventory file with your new values (hostnames and IP addresses of the machines you'll be working with). The IP addresses to put in there are the ones you have set on VLAN50 (or the DNS names you have associated with them, if applicable).
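As an illustration (group names, hostnames and addresses below are made up; use your own VLAN50 values), an inventory could look like:

```ini
# illustrative inventory -- replace with your own hostnames and VLAN50 IPs
[db]
db1 ansible_host=192.168.50.10

[docker]
docker1 ansible_host=192.168.50.11
```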
Read the roles to understand how everything works!
Ansible runs playbooks, which are collections of roles, which in turn are collections of tasks. Tasks are instructions like "install this package", "copy this file", "create this directory", "install this service", "create this container" and so on and so forth.
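To make this concrete, a task is a short YAML mapping naming a module and its parameters. A sketch (the package name and directory path here are just examples):

```yaml
- name: install the nginx package          # "install this package"
  ansible.builtin.apt:
    name: nginx
    state: present

- name: create the synapse data directory  # "create this directory"
  ansible.builtin.file:
    path: /opt/synapse-data
    state: directory
    mode: "0755"
```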
I have documented the example roles in ./roles extensively and I would greatly encourage you to read them to understand how to do basic things in ansible such as copying a file, starting a service and so on. If you do not do that, you will be lost and won't understand anything that is coming at you. The playbooks, on the other hand, are stored in the ./playbooks/ directory; feel free to look at them to inspire your own playbooks for this lab.
Check everything works properly
You should now be able to actually run ansible to execute the base.yml playbook.
$ ansible-playbook -vi inventory -l all playbooks/base.yml
The -i flag specifies the inventory file to use; the -l flag limits which hosts it applies to, either by hostname or group name. Here we apply it to all the hosts.
If you cannot connect to the host with a permission denied error, rerun the command with the -k flag to make ansible ask you for the host password.
# make sure the sshpass program is installed on the Zabeth
$ sudo apt install -y sshpass
$ ansible-playbook -vi inventory -k -l all playbooks/base.yml
If that does not work, add your public key to the ~/.ssh/authorized_keys in the root home folder on every one of your virtual machines, and make sure the authorized_keys file has 600 permissions. If the permissions are too wide, sshd will refuse to use it.
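A sketch of the expected layout and permissions (using a scratch directory here instead of root's real ~/.ssh, and a placeholder key; adapt the paths when doing it for real):

```shell
# demonstrate the required permissions on a scratch copy of root's ~/.ssh
SSH_DIR=./root-ssh-demo
mkdir -p "$SSH_DIR"
# placeholder key -- use the contents of your ~/.ssh/id_ed25519.pub
echo "ssh-ed25519 AAAAC3... user@zabeth" >> "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"
stat -c '%a' "$SSH_DIR/authorized_keys"   # prints 600
```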
While you are at it, I would recommend you install docker as well using the docker.yml playbook.
Now you are good to go!
Install a database server
To deploy Synapse and Mastodon, you need to deploy a database server. We are going to use Postgres in this lab. You will need the community.postgresql.postgresql_db module for this. The community.* modules are modules written by the community and available to everyone; you will encounter similar modules when you start deploying docker containers!
For more details, I refer you to the official ansible postgres documentation.
Note that you will need to install (with ansible!) the following packages:
- postgresql
- python3-psycopg2
If you are feeling adventurous you can install postgresql using docker.
Either way, you need to install this on your DB machine, which should be the one with the least amount of RAM.
Go check the configuration of /etc/postgresql/15/main/pg_hba.conf and /etc/postgresql/15/main/postgresql.conf to make sure you can connect from 0.0.0.0.
In the pg_hba.conf file, add a line like this at the end:
host all all <the netmask you want to add> scram-sha-256
You can (and should) use the postgresql_pg_hba module for that. This module controls how clients can connect to and authenticate against postgres.
The netmask should look like A.B.C.D/E, for example 0.0.0.0/0 for everyone.
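A sketch of such a task (the destination path matches the Postgres 15 layout mentioned above; the 0.0.0.0/0 source is just the "everyone" example, and the notify assumes you have defined such a handler in your role):

```yaml
- name: allow password authentication from the lab network
  community.postgresql.postgresql_pg_hba:
    dest: /etc/postgresql/15/main/pg_hba.conf
    contype: host
    databases: all
    users: all
    source: 0.0.0.0/0
    method: scram-sha-256
  notify: restart postgresql  # assumed handler, define it yourself
```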
Also change the listen_addresses directive to '*' in postgresql.conf.
Then create the user and the database. Please note that you need to pass the following parameters to the database you are creating, otherwise Synapse will most likely complain about encoding issues and refuse to start:
encoding: "UTF-8"
lc_collate: "C"
lc_ctype: "C"
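Put together, the user and database creation could be sketched like this (the user name, database name and password variable are assumptions; note template: template0, which postgres requires when the locale differs from the one in template1):

```yaml
- name: create the synapse database user
  community.postgresql.postgresql_user:
    name: synapse
    password: "{{ synapse_db_password }}"  # store this one encrypted!

- name: create the synapse database
  community.postgresql.postgresql_db:
    name: synapse
    owner: synapse
    encoding: "UTF-8"
    lc_collate: "C"
    lc_ctype: "C"
    template: "template0"
```

You will typically run these tasks as the postgres user on the DB host (become_user: postgres).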
Bonus points if you store the database credentials in an encrypted fashion in your repository with either ansible vault or sops.
If you end up using sops, please also encrypt your secrets with the following age public key so I am able to decrypt them later on: age18rkuwwpzl3az5gr093uhvk7cwg348eajxsm9fjansur5qa97csfs597zh6. This can be achieved by running sops <file> --add-age <my key>
Install Nginx
You need to install nginx (or apache2, or traefik, whichever you are more comfortable with, but be aware I'll only be able to help with nginx and traefik as I have no idea how apache2 works). Use ansible for this, and install the nginx-extras package.
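Installing it from ansible is a one-task affair, for instance:

```yaml
- name: install nginx with the extras package set
  ansible.builtin.apt:
    name: nginx-extras
    state: present
    update_cache: true
```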
Create a certificate for our deployments
While you are at it, you might as well create a DNS A record pointing at OpenWRT (which will forward traffic to nginx).
We need a certificate to secure communication over HTTPS; the Matrix protocol requires it. This can be done manually for the moment and automated later, as it is not super straightforward. I refer you to the documentation on certbot + nginx. I would recommend that you create a matrix.<yourdomain> certificate, then back it up somewhere safe, as Letsencrypt has pretty aggressive rate limiting with regard to certificate creation.
Let's install Synapse
Synapse is the messaging server I told you about.
Preconfigure synapse
I would recommend you read quickly through this synapse install guide so you understand what you are doing before touching anything!
In essence you need to run the following command on your lab machine:
$ docker run -it --rm \
-v $(pwd)/synapse-data:/data \
-e SYNAPSE_SERVER_NAME=matrix.<yourdomain> \
-e SYNAPSE_REPORT_STATS=no \
matrixdotorg/synapse:latest generate
⚠️ careful, this is not the same command as in the documentation above: this one creates the files you need in a synapse-data directory inside your working directory, which is probably what you want in order to send them to your host via ansible.
Create a new role and copy these files over to roles/synapse/files.
Next, customise the configuration file. If you feel like using postgres (you already set it up, so why not, right?) check the docs.
Another approach, cleaner but potentially harder, would be to tell ansible to run the configuration bootstrap command on the synapse machine if the homeserver.yaml file cannot be found, and then potentially customise the file.
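That conditional bootstrap could be sketched as follows (the /opt/synapse-data path is an assumption, adapt it to wherever you keep the data on the host):

```yaml
- name: check whether homeserver.yaml already exists
  ansible.builtin.stat:
    path: /opt/synapse-data/homeserver.yaml  # assumed data directory
  register: homeserver_config

- name: generate the synapse configuration if it is missing
  ansible.builtin.command: >
    docker run --rm
    -v /opt/synapse-data:/data
    -e SYNAPSE_SERVER_NAME=matrix.<yourdomain>
    -e SYNAPSE_REPORT_STATS=no
    matrixdotorg/synapse:latest generate
  when: not homeserver_config.stat.exists
```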
Create the synapse docker container
Normally your synapse role does not have any tasks yet, so go ahead and create a roles/synapse/tasks/main.yml file. You need to use the community.docker.docker_container module for this. In essence you need to make sure of a few things:
- you want your container to forward port 8008 on the host to port 8008 in the container; either that, or you can just use network_mode: host in the definition
- you want to mount the directory in which you copied the homeserver.yaml, matrix.<yourdomain>.log.config and matrix.<yourdomain>.signing.key to /data in the container.
For example, if I wanted to start a redis container that forwards port 6379 and mounts /tmp, I would do something like:
---
- name: creates the redis container
  community.docker.docker_container:
    name: "redis"
    image: "{{ redis_image }}:{{ redis_image_tag }}"
    state: started
    recreate: yes
    ports:
      - 6379:6379/tcp
    volumes:
      - /tmp:/tmp
    restart_policy: "unless-stopped"
Note the restart_policy: "unless-stopped", which ensures that your container is going to:
- Be started on boot
- Restart in case it crashes
ℹ️ Note that I used {{ redis_image }} in the manifest above. These are variables, available in the same fashion as they are for templates, which use the jinja2 markup language. If you want to create a default variable for your role, you can create a roles/<rolename>/defaults/main.yml that will contain something like this, for example:
---
redis_image: redis
redis_image_tag: latest
You can override this variable if you want in any of the following:
- the vars section of a playbook file
- a host_vars/<hostname>.yml file for host-specific overrides
- a group_vars/<group>.yml file for group-specific variables
Anyway, back to synapse: create a quick playbook containing just the synapse role:
---
- hosts: all
  roles:
    - synapse
Then apply it like we have seen above (ansible-playbook -vi inventory -l <dockerhost> playbooks/synapse.yml).
You should have a running container by now, with a directory mounted containing your configuration. Check your container works with the docker ps command; you should see something like:
root synapse.lil.maurice.fr ~ # docker ps | grep synapse
95aba77cc320 matrixdotorg/synapse:v1.74.0 "/start.py" 2 weeks ago Up 6 days (healthy) 8009/tcp, 0.0.0.0:8008->8008/tcp, 8448/tcp synapse
Configure the nginx reverse proxy
You can see from the configuration that matrix listens on port 8008 for federation and client connections.
This means you will need to create one reverse proxy in your Nginx configuration that listens on port 443, uses the TLS certificate you generated earlier, and answers to the server name matrix.<yourhost>. To do that, update your Nginx role to upload a file to /etc/nginx/sites-enabled/matrix that looks something like:
server {
    listen 443 ssl;
    server_name matrix.<yourdomain>;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    ssl_certificate <your certificate>.crt;
    ssl_certificate_key <your key>.key;

    location / {
        proxy_pass http://127.0.0.1:8008;
    }
}
After a reload of nginx it should work properly.
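If you manage nginx from your role, a handler is the idiomatic way to do that reload; a sketch of a roles/nginx/handlers/main.yml (the file path is an assumption; notify the handler from the task that uploads the site file):

```yaml
---
- name: reload nginx
  ansible.builtin.service:
    name: nginx
    state: reloaded
```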
Configure openwrt
To be able to talk to synapse, you need to configure openwrt to send all packets arriving on ports 443 and 8448 (the default federation port) to your docker host on port 443. You can do that either with the UI or with the command line config
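With the command line, the forwards end up as redirect sections in /etc/config/firewall on the router; a sketch (the zone names and destination IP are examples, adapt them to your setup):

```
config redirect
    option name 'matrix-https'
    option src 'wan'
    option src_dport '443'
    option dest 'lan'
    option dest_ip '192.168.50.11'
    option dest_port '443'
    option proto 'tcp'
    option target 'DNAT'

config redirect
    option name 'matrix-federation'
    option src 'wan'
    option src_dport '8448'
    option dest 'lan'
    option dest_ip '192.168.50.11'
    option dest_port '443'
    option proto 'tcp'
    option target 'DNAT'
```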
Create an admin user
A note about matrix addresses, they look like this:
@<localpart>:<homeserver>
For example my address is @thomas:matrix.maurice.fr: the localpart is thomas and the homeserver is matrix.maurice.fr
In the following step you can create one or two users (for your team) to connect to the homeserver; pick the localpart carefully because it will be your username.
Get into your synapse host and run
$ docker exec -it synapse register_new_matrix_user http://localhost:8008 -c /data/homeserver.yaml
Fill in the localpart, the password and you should be good to go.
Test it works
Check you can login
The Matrix people wrote a nice web-based client for the protocol, called Element (formerly known as Riot). Go to Element, log in by specifying the homeserver URL, username and password, and verify the login goes smoothly. If it does not, check the logs.
Test the federation works
Federation must be working for you to be able to speak to users on other servers!
Enter your hostname into the matrix federation tester and check the federation works
Test by sending a message to another address
Fire a message to @thomas:matrix.maurice.fr, or better yet, send a message to one of your colleagues on another homeserver.
Congratulations, you reached the end!
This is pretty cool, however there is more you can do in the optional objectives of the lab, I would suggest you have a look at:
- Setup a Gitea runner and make sure your tests pass
- Add more tests?
- Using a mechanism (sops/ansible vault/hashicorp vault) to store secrets in the repository and make it accessible to the gitea runner
- Finally, run ansible from the CI so your playbooks get applied when you push new code to the repository