Compare commits

...

4 Commits

| SHA1 | Message | Date |
| --- | --- | --- |
| 19487527f9 | add parser *(continuous-integration/drone/push: build is passing)* | 2023-06-14 16:30:17 +02:00 |
| 28cc208e13 | update drone and readme for repo based resources | 2023-06-14 15:52:30 +02:00 |
| ad46181f6e | remove resources from pdj repo | 2023-06-14 14:51:59 +02:00 |
| 360d1f6b7e | remove identity file | 2023-06-14 14:45:23 +02:00 |
13 changed files with 76 additions and 216 deletions


@@ -12,7 +12,8 @@ steps:
      # - apt update -y # not needed with custom image
      # - apt install build-essential patchelf -y # not needed with custom image
      # - pip install nuitka # not needed with custom image
-     - python -m nuitka --onefile run.py --include-data-dir=./resources=resources --output-filename="ProxmoxDeploy${DRONE_TAG##v}"
+     # - python -m nuitka --onefile run.py --include-data-dir=./resources=resources --output-filename="ProxmoxDeploy${DRONE_TAG##v}" # not needed with new system with repo
+     - python -m nuitka --onefile run.py --output-filename="ProxmoxDeploy${DRONE_TAG##v}"
  - name: gitea_release
    image: plugins/gitea-release
    settings:


@@ -10,13 +10,70 @@ Proxmox Deploy is a little script to manage my HomeLab with JSON file.
As my homelab was growing I realised that it was harder and harder to keep everything in sync and up to date.
So I decided to create a script to manage my Proxmox homelab.
-# How to use it
+# How it works
-Have a look at the resources folder to see how to use it.
+The concept is simple: you have a Git repository with the following structure:
```
.
├── config.json
├── lxc
│   ├── <id>
│   │   ├── config.json
│   │   ├── <your files>
│   │   └── <your folders>
│   └── <id>
│       ├── ...
├── qemu
│   ├── <id>
│   │   ├── config.json
│   │   ├── <your files>
│   │   └── <your folders>
│   └── <id>
│       ├── ...
└── scripts
    ├── <your scripts>
    └── ...
```
*See below for more information about the structure and the different files.*
PDJ (Proxmox Deploy JSON) is a program that reads the files in that repository and executes the necessary commands to create or update your LXC containers and VMs.
Ideally you have some kind of CI, such as Drone or GitHub/Gitea Actions, that runs PDJ automatically when you push a change, so your homelab is updated almost instantly.
If you don't, no big deal: just clone and update the repo manually, then start PDJ *(a crontab at regular intervals works fine; see the sketch below)*.
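A minimal sketch of that manual/cron route, assuming the repository is cloned to `/opt/homelab` and the binary is installed to `/usr/local/bin` (both paths are hypothetical; only the `--repo` option comes from the Usage section below):
```bash
#!/bin/bash
# Hypothetical helper you could call from a crontab at regular intervals.
cd /opt/homelab || exit 1                            # local clone of your homelab repository
git pull --quiet                                     # fetch the latest pushed changes
/usr/local/bin/ProxmoxDeploy --repo /opt/homelab     # let PDJ apply them
```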
# Usage
## Download
Download the pre-compiled binaries from the release page or build it yourself.
## Build it yourself
```bash
# Build on Debian
git clone <url of this repo>
cd ProxmoxDeploy
apt update && apt install -y build-essential patchelf
pip install nuitka
pip install -r requirements.txt
python -m nuitka --onefile run.py --output-filename="ProxmoxDeploy"
```
*Also see the ``Dockerfile`` and ``.drone.yml`` for more information.*
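If you prefer to build inside a container, here is a rough sketch, assuming the repository's `Dockerfile` produces an image with Python, Nuitka and the requirements preinstalled (the image tag and mount path are made up for the example):
```bash
# Hypothetical container-based build; adjust to whatever the Dockerfile actually provides
docker build -t pdj-build .
docker run --rm -v "$(pwd)":/src -w /src pdj-build \
  python -m nuitka --onefile run.py --output-filename="ProxmoxDeploy"
```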
## Run it
```bash
# Run it
./ProxmoxDeploy --repo /path/to/repo
```
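The `-v`/`--verbose` flag introduced by the "add parser" commit in this compare should combine with the command above, assuming both options are accepted together (the repo path is a placeholder):
```bash
# Same run, but with debug-level logging (-v/--verbose is added by the "add parser" commit)
./ProxmoxDeploy --repo /path/to/repo -v
```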
# Documentation
## Configuration
### General
Before configuring your LXC containers and VMs, you must decide how you'll run this program.
As it requires SSH for some actions, you have two options:
- Run it directly on the Proxmox VE host **(recommended)** *(referred to as `local`)*
  - No configuration needed
  - You can use Gitea Actions/Drone to run it automatically as soon as a change is pushed
- Run it on another machine and connect via SSH
  - You'll need to set up a passwordless SSH connection between your machine and the Proxmox VE host *(see the sketch below)*
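A minimal sketch of that passwordless setup, reusing the `root@192.168.11.99` values from the `config.json` shown later in this compare (substitute your own user and host):
```bash
# Generate a key pair if you don't already have one
ssh-keygen -t ed25519
# Install the public key on the Proxmox VE host
ssh-copy-id root@192.168.11.99
# Verify that no password prompt appears
ssh root@192.168.11.99 "pveversion"
```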
### Proxmox VE
The Proxmox VE configuration is located in the `config.json` file.
```json


@@ -1,10 +0,0 @@
{
    "pve": {
        "host": "192.168.11.99",
        "user": "root",
        "port": 22,
        "local": false
    },
    "settings": {
    }
}


@@ -1,49 +0,0 @@
{
    "lxc_hostname": "traefik",
    "os": {
        "name": "alpine",
        "release": "3.17"
    },
    "resources": {
        "cpu": 2,
        "memory": 1024,
        "swap": 256,
        "disk": 8,
        "storage": "local-lvm"
    },
    "network": {
        "bridge": "vmbr0",
        "ipv4": "dhcp",
        "ipv6": "auto",
        "mac": "92:A6:71:77:8E:D8",
        "gateway4": "",
        "gateway6": "",
        "vlan": ""
    },
    "options": {
        "privileged": false,
        "start_on_boot": false,
        "startup_order": 2,
        "password": "qwertz1234",
        "tags": "2-proxy+auth"
    },
    "creation": {
        "conditions": {
            "programs": ["docker"],
            "folders": ["/var/data/traefik", "/var/data/config/traefik"],
            "files": ["/var/data/traefik/traefik.toml", "/var/data/config/traefikv2/docker-compose.yml"]
        },
        "steps": [
            {
                "type": "script",
                "local_path": "global/scripts/install-docker.sh"
            },
            {
                "type": "folder_copy",
                "path": "data/",
                "destination": "/var/"
            }
        ]
    },
    "deploy": {}
}


@@ -1,40 +0,0 @@
version: "3"
services:
  app:
    image: traefik:v2.9
    env_file: /var/data/config/traefikv2/traefik.env
    restart: always
    ports:
      - "80:80" # http
      - "443:443" # https
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/data/config/traefikv2/dyn:/dyn
      - /var/data/config/traefikv2/traefik.toml:/etc/traefik/traefik.toml
      - /var/data/traefik/traefik.log:/traefik.log
      - /var/data/traefik/access.log:/access.log
      - /var/data/traefik/acme.json:/acme.json
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`traefik.xyz.abc`)"
      - "traefik.http.routers.api.entrypoints=https"
      - "traefik.http.routers.api.service=api@internal"
      - "traefik.http.services.dummy.loadbalancer.server.port=9999"
      - "traefik.http.routers.api.tls=true"
      - "traefik.http.routers.api.tls.domains[0].main=xyz.abc"
      - "traefik.http.routers.api.tls.domains[0].sans=*.xyz.abc"
      - "traefik.http.routers.api.tls.certresolver=cloudflare"
    networks:
      - traefik_public
    logging:
      driver: "json-file"
      options:
        max-size: "2m"
        max-file: "2"
networks:
  traefik_public:
    external: true


@@ -1,3 +0,0 @@
# CloudFlare example
CLOUDFLARE_EMAIL=me@xyz.abc
CLOUDFLARE_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx


@@ -1,52 +0,0 @@
[global]
checkNewVersion = true
# Enable the Dashboard
[api]
dashboard = true
# Write out Traefik logs
[log]
level = "INFO"
filePath = "/traefik.log"
# [accessLog]
# filePath = "/access.log"
[entryPoints.http]
address = ":80"
# Redirect to HTTPS (why wouldn't you?)
[entryPoints.http.http.redirections.entryPoint]
to = "https"
scheme = "https"
[entryPoints.http.forwardedHeaders]
trustedIPs = ["10.0.0.0/8", "172.16.0.0/16", "192.168.0.0/16", "fc00::/7"]
[entryPoints.https]
address = ":443"
[entryPoints.https.http.tls]
certResolver = "cloudflare"
[entryPoints.https.forwardedHeaders]
trustedIPs = ["10.0.0.0/8", "172.16.0.0/16", "192.168.0.0/16", "fc00::/7"]
# Cloudflare
[certificatesResolvers.infomaniak.acme]
email = "me@xyz.abc"
storage = "acme.json"
[certificatesResolvers.infomaniak.acme.dnsChallenge]
provider = "cloudflare"
resolvers = ["1.1.1.1:53", "8.8.8.8:53"]
# Docker Traefik provider
[providers.docker]
endpoint = "unix:///var/run/docker.sock"
swarmMode = false
watch = true
exposedByDefault = false
[providers.file]
directory = "/dyn"
watch = true


@@ -1 +0,0 @@
{}


@@ -1,54 +0,0 @@
#!/bin/bash
if which docker >/dev/null 2>&1; then
echo "Docker is installed"
exit 1
else
echo "Docker is not installed"
fi
if lsb_release -a 2>/dev/null | grep -q -E "Debian"; then
echo "Running Debian"
sudo apt-get remove docker docker-engine docker.io containerd runc -y
sudo apt-get update -y
sudo apt-get install \
ca-certificates \
curl \
gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" |
sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt-get update -y
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
elif lsb_release -a 2>/dev/null | grep -q -E "Ubuntu"; then
echo "Running Ubuntu"
sudo apt-get remove docker docker-engine docker.io containerd runc -y
sudo apt-get update -y
sudo apt-get install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" |
sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt-get update -y
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
elif cat /etc/os-release 2>/dev/null | grep -q -i "alpine"; then
echo "Running Alpine"
apk add docker docker-compose
addgroup username docker
rc-update add docker default
service docker start
else
echo "Unknown distribution"
exit 1
fi

run.py

@@ -1,7 +1,18 @@
import argparse
import logging
from src import main

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-v", "--verbose", help="increase output verbosity", action="store_true")
    args = parser.parse_args()
    if args.verbose:
        logging.basicConfig(format='[%(levelname)s] : %(message)s', level=logging.DEBUG)
    else:
        logging.basicConfig(format='[%(levelname)s] : %(message)s', level=logging.INFO)
    main.run()


@@ -177,7 +177,7 @@ def run_command_on_pve(command: str, return_status_code: bool = False, exception
    logging.debug(f"Running command on PVE (ssh): {command}")
    # catch errors code
-   command = subprocess.run(f'ssh -i {get_identity_file()} {username}@{host} -p {port} "{command}"', shell=shell, capture_output=True,
+   command = subprocess.run(f'ssh {username}@{host} -p {port} "{command}"', shell=shell, capture_output=True,
                            encoding="utf-8")
    # If return code is not 0 and that exception_on_exit is True and return_status_code is False, throw an exception
@@ -414,7 +414,7 @@ def copy_file_to_pve(path: Path, destination: str):
    config = get_config()
    # copy the file to the PVE from the local machine
-   run_command_locally(command=f"scp -i {get_identity_file()} {str(path)} {config['pve']['user']}@{config['pve']['host']}:{destination}",
+   run_command_locally(command=f"scp {str(path)} {config['pve']['user']}@{config['pve']['host']}:{destination}",
                        return_status_code=True)
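Because these two hunks drop the explicit `-i {get_identity_file()}` argument, `ssh` and `scp` now rely on the default identity of whichever user runs PDJ. A quick sanity check, again borrowing the `root@192.168.11.99` values from the removed `config.json` (the file name below is just an example):
```bash
# Confirm the default key is accepted without an explicit -i
ssh root@192.168.11.99 -p 22 "pveversion"
# Confirm scp behaves the same way
scp ./example.txt root@192.168.11.99:/tmp/
```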