Publish Ansible GitOps Post
---
title: "Efficient and Secure GitOps using Gitea, Ansible, and Github Actions"
date: 2024-11-28
toc: true
description: Streamline your homelab infrastructure with Gitea Actions and Ansible. Learn how to automate server configurations, implement GitOps principles, and enhance security through self-hosted, reproducible deployment workflows.
tags:
- Security
- DevOps
- Network Automation
---
## The Problem
I run a pretty small homelab, for what it's worth: about four LXC containers and two QEMU virtual machines running at any given time, most of which just serve as convenient frontends for storing my data.
![Picture of my Proxmox Container List](/images/proxmox-machines.png)
Even though I manage so few servers and services, I still find myself wanting a way to deploy changes across all of my machines simultaneously. For this purpose, I have used and maintained a set of Ansible playbooks which, for the most part, have suited my workflow.
```sh
Ansible/
├── ansible.cfg
├── homelab-vault
│   └── secrets.yml
├── inventory
│   └── homelab.ini
├── playbooks
│   ├── debian.yml
│   ├── pki.yml
│   └── proxmox.yml
├── requirements.yaml
├── roles
│   ├── cloudflare-dns.yml
│   ├── debian.yml
│   ├── fail2ban.yml
│   ├── heartbeat.yml
│   └── openssh.yml
└── templates
    └── dnscloudflare.ini.j2
```
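To give a sense of what lives in this tree, one of the playbooks might look roughly like the sketch below (hypothetical contents; the group and role names are placeholders, and the real files aren't reproduced in this post):
```yaml
# Hypothetical sketch of Ansible/playbooks/debian.yml
- name: Baseline configuration for Debian hosts
  hosts: debian        # assumed inventory group from homelab.ini
  become: true
  roles:
    - debian           # base packages and upgrades
    - openssh          # hardened sshd configuration
    - fail2ban         # brute-force protection
```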
These roles and playbooks are fairly minimal and not terribly complex, but I enjoy the sheer capacity for automation that Ansible affords. The real question is: if I want these policies (and updates) to reach all my servers reliably, how do I make that happen automatically, on a regular schedule, without my direct oversight?
## Gitea Actions to the Rescue!
The answer, if the title did not make it obvious, is GitHub Actions (or rather Gitea Actions, but we'll get to that). Gitea Actions was the missing piece in transforming my Ansible playbooks from a manual toolset into a fully automated configuration management system.
If you've never encountered it, [Gitea Actions](https://docs.gitea.com/usage/actions/overview) is essentially an open-source, self-hosted alternative to GitHub Actions, designed to provide continuous integration and continuous deployment (CI/CD) capabilities directly within a Git repository. Gitea Actions uses a workflow syntax nearly identical to GitHub's, making it remarkably easy to migrate or adapt existing workflows.
Workflow files are written in YAML and stored in the `.gitea/workflows/` directory of a repository. They define exactly how automation should occur - specifying triggers, environment requirements, and the precise steps to execute. From these files, Gitea Actions spins up ephemeral runners - temporary Docker containers that execute the specified workflow steps in order. The runners are self-hosted, so I can use my own infrastructure for running jobs. In my homelab, this means I can run complex deployment and testing workflows without relying on external cloud services, paying for hosted runners, or, most importantly, creating a security risk by allowing external traffic to manage my servers.
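Registering a self-hosted runner is the one piece that lives outside the repository: Gitea's `act_runner` runs somewhere on the network and polls the instance for jobs. A minimal docker-compose sketch might look roughly like this (not my exact setup; the URL, token, and paths are placeholders):
```yaml
# Rough sketch of a self-hosted act_runner deployment (values are placeholders)
services:
  runner:
    image: gitea/act_runner:latest
    environment:
      GITEA_INSTANCE_URL: "https://gitea.example.com"         # your Gitea instance
      GITEA_RUNNER_REGISTRATION_TOKEN: "<registration-token>" # from the Gitea admin UI
      GITEA_RUNNER_NAME: "homelab-runner"
    volumes:
      - ./runner-data:/data
      - /var/run/docker.sock:/var/run/docker.sock             # lets the runner launch job containers
```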
The following is the Gitea Action that generated the site you're currently reading:
```yml
# .gitea/workflows/build.yaml
name: Build Hugo Site
run-name: Static Site Generation
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      HUGO_VERSION: 0.135.0
    steps:
      - name: Install Hugo CLI
        run: |
          wget -O ${{ runner.temp }}/hugo.deb https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.deb \
          && dpkg -i ${{ runner.temp }}/hugo.deb
      - name: Checkout
        uses: actions/checkout@v3
        with:
          submodules: recursive
      - name: Build with Hugo
        env:
          HUGO_ENVIRONMENT: production
          HUGO_ENV: production
        run: |
          hugo \
            --minify
```
My workflows, like the one shown above, are fairly rudimentary examples, but they give the general idea. Actions, much like Ansible playbooks, define precise steps: installing dependencies, checking out code, running linters, and executing programs. The entire process is controlled by these declarative YAML files, providing a transparent, repeatable method of continuous deployment.
## Gitea Actions for Ansible
Now that we've covered that primer, let's take a closer look at the two workflows that power my infrastructure automation: `ansible-deploy.yml` and `ansible-lint.yml`.
### Deployment Workflow
The `ansible-deploy.yml` workflow is the workhorse of my infrastructure management. Here's a breakdown of its key components:
```yaml
name: Ansible Deploy
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      RUNNER_TOOL_CACHE: /toolcache
      ANSIBLE_VERSION: "8.7.0"
    strategy:
      matrix:
        playbook:
          - Ansible/playbooks/debian.yml
          - Ansible/playbooks/proxmox.yml
```
The workflow triggers on every push, using a matrix strategy to run multiple playbooks simultaneously. I've pinned the Ansible version to ensure consistency (and package caching) across deployments. This might seem like overkill for a small homelab (and it probably is), but it's a practice borrowed from enterprise-grade infrastructure management - predictability is key.
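If I ever wanted deployments to run only from the default branch rather than on every push, the trigger could be narrowed to something like this sketch (not what the workflow above currently does):
```yaml
# Hypothetical narrower trigger: only deploy on pushes to main
on:
  push:
    branches:
      - main
```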
The SSH configuration is particularly crucial:
```yaml
steps:
  - name: Copy SSH Key
    run: |
      mkdir ~/.ssh/
      echo "Host *" > ~/.ssh/config
      echo " StrictHostKeyChecking no" >> ~/.ssh/config
      echo '${{secrets.SSH_PRIVATE_KEY}}' > ~/.ssh/id_rsa
      chmod 600 ~/.ssh/id_rsa
```
This step sets up secure, passwordless SSH access to my servers. By storing the SSH key as an encrypted secret in my Gitea instance, I ensure that my credentials never touch the repository directly. The `StrictHostKeyChecking no` might raise eyebrows, but in a controlled homelab environment, it simplifies remote access.
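If you'd rather keep host key verification enabled, one alternative is to pin the servers' keys through another secret, roughly like the hypothetical step below (`SSH_KNOWN_HOSTS` is a made-up secret name holding `ssh-keyscan` output for each host):
```yaml
# Hypothetical alternative: pin host keys instead of disabling StrictHostKeyChecking
- name: Add known hosts
  run: |
    mkdir -p ~/.ssh/
    echo '${{ secrets.SSH_KNOWN_HOSTS }}' > ~/.ssh/known_hosts
    chmod 644 ~/.ssh/known_hosts
```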
The dependency and environment setup follows a careful, reproducible pattern:
```yaml
- name: Install Pip
  run: |
    apt update -y
    apt install python3-pip -y
- name: "Cache python packages"
  uses: actions/cache@v3
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-${{ env.ANSIBLE_VERSION }}
- name: Install Ansible
  run: |
    python3 -m pip install ansible==${{ env.ANSIBLE_VERSION }}
- name: Install Ansible Galaxy requirements
  run: |
    ansible-galaxy install -r Ansible/requirements.yaml
```
Caching Python packages might seem like a minor optimization, but it significantly speeds up repeated workflow runs. The explicit installation of Ansible Galaxy requirements ensures that all necessary roles and collections are available.
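For reference, a Galaxy requirements file of this sort simply lists the external roles and collections to pull in; the entries below are only an illustration, not the actual contents of my `requirements.yaml`:
```yaml
# Illustrative example of a Galaxy requirements file (contents are placeholders)
roles:
  - name: geerlingguy.security
collections:
  - name: community.general
```
With the environment prepared, the final step hands things off to an off-the-shelf action that actually runs the playbooks: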
```yaml
- name: Run playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: ${{ matrix.playbook }}
    directory: ./
    key: ${{secrets.SSH_PRIVATE_KEY}}
    vault_password: ${{secrets.VAULT_PASSWORD}}
    options: |
      --inventory Ansible/inventory/homelab.ini
      --extra-vars "@Ansible/homelab-vault/secrets.yml"
```
As mentioned before, the beauty of Gitea Actions is that I can easily grab an action written for GitHub and include it in my workflow. The step above is what does most of the heavy lifting, Ansible-wise.
Notice that the prior action specifies an Ansible Vault password and a reference to `secrets.yml`. That file is an encrypted collection of CI/CD secrets used in the playbooks, stored in a private Gitea repository and pulled in by a previous step. This way I can use privileged information in my Ansible automation while keeping it safe and secure.
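Decrypted, such a vault file is just ordinary YAML variables; it is encrypted at rest with `ansible-vault encrypt` and unlocked in CI by the vault password secret. A rough illustration (the variable names are made up):
```yaml
# Hypothetical decrypted view of homelab-vault/secrets.yml
# (the committed file is stored encrypted via `ansible-vault encrypt`)
cloudflare_api_token: "REDACTED"
heartbeat_webhook_url: "https://example.com/hooks/heartbeat"
```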
### Linting for Quality Assurance
The `ansible-lint.yml` workflow complements the deployment process by providing static code analysis:
```yaml
name: Ansible Lint
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Install Ansible-Lint
        run: |
          apt update -y
          apt install python3-pip ansible -y
          python3 -m pip install ansible-lint
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install Ansible Galaxy requirements
        run: |
          ansible-galaxy install -r Ansible/requirements.yaml
      - name: Ansible-Lint
        run: |
          ansible-lint ./Ansible
```
This workflow runs on every push, automatically checking my Ansible code against best practices. It's like having a constant code review process, catching potential misconfigurations or style issues before they can cause problems. The ansible-lint tool can be overly pedantic about some things, but in general I've found it to be very helpful for fixing my anti-patterns.
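When a particular rule is more noise than signal, ansible-lint can be tuned with a `.ansible-lint` file at the repository root; the skipped rules below are just examples of the kind of thing I mean, not my actual configuration:
```yaml
# Example .ansible-lint configuration; the skipped rules are illustrative
skip_list:
  - yaml[line-length]
  - name[casing]
```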
## Conclusion
The beauty of these workflows lies in their simplicity and transparency. Every configuration change is tracked, every deployment is automated, and the entire process is documented directly in the repository. It's far from the most elegant solution, and I'm certainly not a programmer at heart, but I've found this collection of tools to be very valuable in my homelab journey. It has saved me hours of work and made my network a good deal safer in the process. As always, thank you for reading!
