> Note: this repository was archived on 2025-03-03.


# iac (wip)

This is my homelab infrastructure, defined in code.

| Hypervisor | OS | Tools | Firewall | Misc. Automations |
| --- | --- | --- | --- | --- |
| Proxmox | Debian, Ubuntu | Forgejo, Docker, Kubernetes, Renovate, OpenTofu, Packer, Ansible | pfSense | n8n, Actions |

## 📖 Overview

This repository contains the IaC (Infrastructure as Code) configuration for my homelab.

Most of my homelab runs on Proxmox, with VMs managed and maintained using OpenTofu. All VMs are cloned from templates I created with Packer.
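As a rough illustration of this pattern (not my exact configuration), here is a minimal OpenTofu sketch using the bpg/proxmox provider to clone a VM from a Packer-built template. The node name and template VM ID are placeholders.

```hcl
# Illustrative sketch: clone a VM from a Packer-built template.
# "pve1" and vm_id 9000 are hypothetical placeholders.
resource "proxmox_virtual_environment_vm" "example" {
  name      = "example-vm"
  node_name = "pve1"

  clone {
    vm_id = 9000 # ID of the Packer-built template
  }

  cpu {
    cores = 2
  }

  memory {
    dedicated = 4096
  }
}
```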

All services are containerized, either managed with Docker Compose or orchestrated with Kubernetes (K3s). Over time, I've been migrating everything to Kubernetes using GitOps practices, which is my long-term goal.

To automate infrastructure updates, I use Forgejo Actions, which trigger workflows upon changes to this repo. This ensures seamless deployment and maintenance across my homelab:

- **Flux** manages Continuous Deployment (CD) for Kubernetes, bootstrapped via OpenTofu.
- **Docker CD Workflow** handles Continuous Deployment for Docker services.
- **Renovate** keeps services updated by opening PRs for new versions.
- **Yamllint** ensures configuration files are properly structured.
- **Ansible** executes playbooks on all of my VMs, automating management and configuration.
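To show how these pieces hook together, here is a hypothetical Forgejo Actions workflow sketch (filename and runner label are assumptions, not taken from this repo) that lints YAML on every push:

```yaml
# Hypothetical .forgejo/workflows/yamllint.yaml sketch.
name: yamllint
on: [push]
jobs:
  lint:
    runs-on: docker # assumed runner label
    steps:
      - uses: actions/checkout@v4
      - run: pip install yamllint
      - run: yamllint .
```

Forgejo Actions uses a GitHub Actions-compatible syntax, so the same workflow shape applies to the Docker CD job as well.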

## 🔒 Security & Networking

For secret management, I use Bitwarden Secrets and its integrations with the various tools in use.

Kubernetes currently uses SOPS with Age encryption until I finish migrating to Bitwarden Secrets.
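A minimal sketch of what a `.sops.yaml` for this setup could look like (the path regex and Age recipient below are placeholders, not copied from this repo):

```yaml
# Illustrative .sops.yaml: encrypt only the secret values in
# Kubernetes manifests with an Age key.
creation_rules:
  - path_regex: kubernetes/.*\.sops\.yaml$
    encrypted_regex: ^(data|stringData)$
    age: age1examplepublickeyplaceholder # placeholder recipient
```

The `encrypted_regex` keeps the rest of the manifest readable in Git while encrypting only the `data`/`stringData` fields.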

I use Oracle Cloud's Always Free VMs to host Docker services that require uptime (Uptime Kuma, this website). Twingate securely connects my home network to the various VPSes using a Zero Trust architecture.

I use Cloudflare as my DNS provider, with Cloudflare Tunnels exposing some services to the world. Cloudflare Access restricts access to some of those services. This is paired with Fail2Ban, which scans all of my reverse proxy logs for malicious actors who made it through Access and bans them via the Cloudflare WAF.
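As a sketch of the tunnel side of this (a hypothetical Compose fragment, assuming the tunnel is run in token mode), the token is injected via an environment variable rather than committed to the repo:

```yaml
# Hypothetical docker-compose fragment for cloudflared.
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN} # supplied at deploy time
    restart: unless-stopped
```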

For my home network, I use pfSense with VLAN segmentation and strict firewall rules to isolate public-facing machines, ensuring they can only communicate with the necessary services and nothing else.

## 📊 Monitoring & Observability

I use a combination of Grafana, Loki, and Prometheus with various exporters to collect and visualize system metrics, logs, and alerts. This helps maintain visibility into my infrastructure and detect issues proactively.

- **Prometheus**: metrics collection and alerting
- **Loki**: centralized logging for containers and VMs
- **Grafana**: dashboards and visualization
- **Exporters**: Node Exporter, cAdvisor, Blackbox Exporter, etc.
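To make the exporter wiring concrete, here is an illustrative `prometheus.yml` fragment (the target addresses are placeholders, not my real hosts):

```yaml
# Illustrative scrape config for Node Exporter and cAdvisor.
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["192.168.1.10:9100"] # placeholder host
  - job_name: cadvisor
    static_configs:
      - targets: ["192.168.1.10:8080"] # placeholder host
```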

## 🧑‍💻 Getting Started

This repo is not structured as a project you can easily replicate. However, if you are new to any of the tools used, I encourage you to read through the directory for each one to see how I am using it.

Over time I will try to add more detailed instructions in each directory's README.

Some good references for how I learned this stuff (other than RTM):

## 🖥️ Hardware

| Name | Device | CPU | RAM | Storage | GPU | Purpose |
| --- | --- | --- | --- | --- | --- | --- |
| Arc-Ripper | OptiPlex 3050 | Intel i5-6500 | 32 GB DDR4 | 1 TB NVMe | Arc A310 | Jellyfin server, Blu-ray ripper |
| PVE Node 1 | Custom | Intel i7-9700K | 64 GB DDR4 | NVMe for boot and VMs, 4x4 TB HDD RaidZ10 | Nvidia 1660 6 GB | Main node with most VMs, NAS |
| PVE Node 2 | Custom | Intel i7-8700K | 64 GB DDR4 | 1x2 TB NVMe | Nvidia 1060 6 GB | More VMs |

## 📌 To-Do

See Project Board