Mirror of https://github.com/KevinMidboe/schleppe-ha-project.git

Compare commits 10284ed956...1928ab73dd (4 commits: 1928ab73dd, 78729ebd1e, 58d495350f, 6fc2e818e4)

README.md
@@ -1,55 +1,85 @@

# schleppe High Availability project

Goal is to have better webapp uptime than AWS.

Defines code which describes an HA & cached, scalable way of serving web applications.

## Architecture

```
REGION: EU
Domain: schleppe.cloud

     +------- Floating IP -------+
     |                           |
+----+-------+          +--------+-----+
| HAProxy #1 |          |  HAProxy #2  |
+----+-------+          +--------+-----+
     \_____ active / standby ____/
                  |
                  v
          +-------+-------+
          |  haproxy (a)  |
          +---+-------+---+
     direct   |       |   via cache
              |       v
              |   +---+----------+
              |   |  varnish (n) |
              |   +---+----------+
              |       | HIT / MISS
              +---+---+
                  |
                  v
         +--------+--------+
         | web server (n)  |
         +-----------------+


            +----- DNS (Cloudflare) -----+
            |   round-robin A records    |
            +--------------+-------------+
                           |
            +--------------+--------------+
            |                             |
    A: 193.72.45.133              B: 45.23.78.120
        (SITE A)                    (SITE B..N)
            |                             |
            v                             v
  SITE A (REGION: EU)               Copy of site A

     +---- Floating IP (keepalived/etcd) ----+
     |                                       |
+----+---------+                    +--------+-----+
|  HAProxy-1   |                    |  HAProxy-2   |
|  (ACTIVE)    |                    |  (STANDBY)   |
+----+---------+                    +--------+-----+
     \_________ active / standby ___________/
                      |
       (SSL termination + readiness checks)
                      |
                      v
              +-------+-------+
              | haproxy (LB)  |
              +---+-------+---+
         direct   |       |   via cache
                  |       v
                  |   +---+----------+
                  |   | varnish (n)  |
                  |   +---+----------+
                  |       | HIT / MISS
                  +---+---+
                      |
                      v
             +--------+---------+
             | web servers (n)  |
             +------------------+
```

Where varnish & web server are 2-n instances. Currently two regions, EU & US.
Where varnish & web servers are a minimum of 2 instances. Currently three regions: EU, US & schleppe on-prem.

There is always only a single haproxy (with fallback) routing traffic per site, but multiple varnish & web servers, all connected together w/ shared routing tables.

## Configure environment

Ensure that the following environment variables exist. It is smart to disable history in your terminal before pasting any API keys (`unset HISTFILE` for bash, or `fish --private` for fish).

- `CLOUDFLARE_API_TOKEN`: update DNS for given zones
- `HCLOUD_TOKEN`: permissions to create cloud resources

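A minimal sketch of setting these for the current shell only (assuming bash; the token values are placeholders):

```bash
# keep the keys out of shell history for this session
unset HISTFILE

# placeholders – use tokens scoped to the zones / project you actually manage
export CLOUDFLARE_API_TOKEN="<cloudflare token with DNS edit rights for the zones>"
export HCLOUD_TOKEN="<hetzner cloud token with read/write access>"
```
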
## infrastructure

Cloud resources in Hetzner are configured with Pulumi.

```bash
cd hetzner-pulumi

# first time, init pulumi stack (name optional)
pulumi stack init kevinmidboe/hetzner

# required configuration values
pulumi config set sshPublicKey "$(cat ~/.ssh/id_ed25519.pub)"
pulumi config set --secret hcloud:token $HETZNER_API_KEY

# up infrastructure
pulumi up
@@ -63,9 +93,11 @@ pulumi up

Ansible is used to provision the software, environments and services needed.

get ansible configuration values from pulumi output:
Get ansible configuration values from pulumi output:

```bash
cd ansible

# generate inventory (manually update inventory file)
./scripts/generate-inventory.sh | pbcopy

@@ -74,7 +106,7 @@ get ansible configuration values from pulumi output:
./scripts/update-config_webserver-ips.sh
```

run playbooks:
Run playbooks:

```bash
# install, configure & start haproxy
@@ -88,14 +120,43 @@ ansible-playbook plays/docker.yml -i hetzner.ini -l web
ansible-playbook plays/web.yml -i hetzner.ini -l web
```

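A small sanity-check sketch before running the plays. It only assumes the `hetzner.ini` inventory generated above and the pulumi stack outputs already referenced by the scripts:

```bash
# confirm the stack exposes the inventory the scripts read
cd hetzner-pulumi
pulumi stack output --json | jq '.inventory.vms[] | {name, publicIpv4, privateIp}'

# confirm ansible can reach every host in the generated inventory
cd ../ansible
ansible all -i hetzner.ini -m ping

# dry-run a play before applying it
ansible-playbook plays/web.yml -i hetzner.ini -l web --check
```
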
# Manual steps

### ansible play: haproxy

- [x] floating ip DNS registration
- [x] extract variables from pulumi stack outputs
- [ ] add all cloudflare api keys
  - `mkdir -p /root/.secrets/certbot && touch /root/.secrets/certbot/cloudflare_k9e-no.ini`
- [ ] generate certs for appropriate domains
  - `certbot certonly --agree-tos --dns-cloudflare --dns-cloudflare-credentials /root/.secrets/certbot/cloudflare_k9e-no.ini -d k9e.no`
- [ ] combine generated certs into a single PEM for haproxy
  - `cat /etc/letsencrypt/live/k9e.no/fullchain.pem /etc/letsencrypt/live/k9e.no/privkey.pem > /etc/haproxy/certs/ssl-k9e.no.pem`

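For reference, a hedged sketch of what each credentials file is expected to contain; it mirrors the `cloudflare.ini.j2` template further down in this diff, and the token value is a placeholder:

```bash
install -d -m 0700 /root/.secrets/certbot
cat > /root/.secrets/certbot/cloudflare_k9e-no.ini <<'EOF'
# certbot dns-cloudflare credentials
dns_cloudflare_api_token = <cloudflare api token for the k9e.no zone>
EOF
chmod 0600 /root/.secrets/certbot/cloudflare_k9e-no.ini
```
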
roles:
- haproxy
- certbot

The vars `haproxy_varnish_ip` & `haproxy_traefik_ip` define the IPs iterated over when copying the template to hosts. They point to the available varnish cache servers & webservers respectively.

> `certbot_cloudflare_domains` runs certbot to make sure valid certs exist for the instances serving traffic attached to DNS.

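After the play has run, a quick way to sanity-check the result is the haproxy config check plus the `?debug=1` response headers added by the bundled haproxy template (header names taken from the template later in this diff):

```bash
# config is valid and the service is up
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl status haproxy --no-pager

# X-HA-Frontend / X-HA-Backend / X-HA-Server show which node answered
curl -sk -o /dev/null -D - "https://k9e.no/?debug=1" | grep -i '^x-ha-'
```
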
### ansible play: varnish

roles:
- varnish

Installs and configures varnish. Iterates over all `haproxy_traefik_ip` entries when copying the varnish.vcl template. Make sure to update these IPs with the current webservers we want varnish to point to. These should match the same webservers haproxy might point at directly when not proxying through varnish.

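A quick check sketch for a provisioned varnish node. The `/varnishcheck` endpoint and the `X-Cache` header come from the VCL templates later in this diff; replace the IP with one of `haproxy_varnish_ip` and the path with a real static asset:

```bash
# health endpoint haproxy polls (synthesised 200 by vcl_recv)
curl -s -o /dev/null -w '%{http_code}\n' http://10.24.2.1/varnishcheck

# request the same static object twice – the second response should report a HIT
curl -s -o /dev/null -D - http://10.24.2.1/favicons/favicon.png | grep -i '^x-cache'
curl -s -o /dev/null -D - http://10.24.2.1/favicons/favicon.png | grep -i '^x-cache'
```
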
### ansible play: docker + web

## manual steps / TODO

Still issuing certs manually:

```bash
cd /root/.secrets/certbot

touch cloudflare_k9e-no.ini; touch cloudflare_planetposen-no.ini; touch cloudflare_schleppe-cloud.ini

certbot certonly --dns-cloudflare --dns-cloudflare-credentials /root/.secrets/certbot/cloudflare_schleppe-cloud.ini -d whoami.schleppe.cloud --agree-tos && \
certbot certonly --dns-cloudflare --dns-cloudflare-credentials /root/.secrets/certbot/cloudflare_k9e-no.ini -d k9e.no --agree-tos && \
certbot certonly --dns-cloudflare --dns-cloudflare-credentials /root/.secrets/certbot/cloudflare_planetposen-no.ini -d planetposen.no --agree-tos

cat /etc/letsencrypt/live/k9e.no/fullchain.pem /etc/letsencrypt/live/k9e.no/privkey.pem > /etc/haproxy/certs/ssl-k9e.no.pem && \
cat /etc/letsencrypt/live/planetposen.no/fullchain.pem /etc/letsencrypt/live/planetposen.no/privkey.pem > /etc/haproxy/certs/ssl-planetposen.no.pem && \
cat /etc/letsencrypt/live/whoami.schleppe.cloud/fullchain.pem /etc/letsencrypt/live/whoami.schleppe.cloud/privkey.pem > /etc/haproxy/certs/ssl-whoami.schleppe.cloud.pem

systemctl restart haproxy.service
```
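A small follow-up sketch to confirm the bundles haproxy will load are valid and not about to expire:

```bash
for pem in /etc/haproxy/certs/ssl-*.pem; do
  echo "== $pem"
  openssl x509 -in "$pem" -noout -subject -enddate
done

# validate the full haproxy config before/after the restart
haproxy -c -f /etc/haproxy/haproxy.cfg
```
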

Need to have a shared storage between all the instances, e.g. `etcd`.

ansible/group_vars/haproxy.yml
@@ -1,12 +1,21 @@
haproxy_traefik_ip:
  - "10.24.1.1"
  - "10.25.0.4"
haproxy_traefik_port: 80
haproxy_varnish_port: 80
haproxy_cookie_value: "{{ inventory_hostname | default('server-1') }}"
haproxy_dynamic_cookie_key: "mysecretphrase"
haproxy_stats_auth: "admin:strongpassword"
haproxy_certs_dir: "/etc/haproxy/certs"

certbot_cloudflare_secrets_dir: "/root/.secrets/certbot"
certbot_cloudflare_ini_path: "/root/.secrets/certbot/cloudflare.ini"
certbot_cloudflare_api_token: "REPLACE_WITH_REAL_TOKEN"
haproxy_varnish_ip:
  - 10.24.2.1
  - 10.24.2.2
  - 10.25.2.1
  - 10.25.2.2
haproxy_traefik_ip:
  - 10.24.3.6
  - 10.24.3.3
  - 10.25.3.4
certbot_cloudflare_domains:
  - k9e.no
  - planetposen.no
  - whoami.schleppe.cloud

ansible/group_vars/varnish.yml (new file, 7 lines)
@@ -0,0 +1,7 @@
varnish_major: 60lts
varnish_cfg_path: /etc/varnish
haproxy_traefik_port: 80
haproxy_traefik_ip:
  - 10.24.3.6
  - 10.24.3.3
  - 10.25.3.4

@@ -3,5 +3,5 @@
  hosts: haproxy

  roles:
    # - role: roles/certbot
    - role: roles/certbot
    - role: roles/haproxy

@@ -2,8 +2,8 @@
- name: Install and configure systemd for varnish
  hosts: varnish
  roles:
    - role: roles/firewall
      enable_80_ufw_port: true
      enable_443_ufw_port: true

    - role: roles/varnish
    # - role: roles/firewall
    #   enable_80_ufw_port: true
    #   enable_443_ufw_port: true
    #
    - role: roles/varnish

@@ -1,3 +1,13 @@
certbot_email: kevin.midboe+ha.project@gmail.com
certbot_secrets_dir: /root/.secrets/certbot
combined_certs_dir: /etc/haproxy/certs
combined_cert_prefix: "ssl-"

# Set true while testing to avoid LE rate limits
certbot_use_staging: false
le_renewal_window_seconds: 2592000
certbot_throttle: 1

certbot_packages:
  - certbot
  - python3-certbot-dns-cloudflare

ansible/roles/certbot/tasks/issue_certs.yml (new file, 81 lines)
@@ -0,0 +1,81 @@
---
- name: Read Cloudflare API key from environment (invalid by default)
  ansible.builtin.set_fact:
    cloudflare_api_key: >-
      {{ lookup('ansible.builtin.env', 'CLOUDFLARE_API_KEY')
         | default('__CLOUDFLARE_API_KEY_NOT_SET__', true) }}
  no_log: true

- name: Fail if CLOUDFLARE_API_KEY is not set
  ansible.builtin.assert:
    that:
      - cloudflare_api_key != '__CLOUDFLARE_API_KEY_NOT_SET__'
    fail_msg: >
      CLOUDFLARE_API_KEY environment variable is required

- name: Validate dns_cloudflare_api_token looks sane
  ansible.builtin.assert:
    that:
      - cloudflare_api_key is regex('[A-Za-z0-9]$')
    fail_msg: >
      must contain a valid
      CLOUDFLARE_API_KEY = <alphanumeric>
  no_log: false

- name: Ensure certbot secrets directory exists
  ansible.builtin.file:
    path: "{{ certbot_secrets_dir }}"
    state: directory
    owner: root
    group: root
    mode: "0700"

- name: Write Cloudflare credential file
  ansible.builtin.template:
    src: cloudflare.ini.j2
    dest: "{{ certbot_secrets_dir }}/certbot-cloudflare.ini"
    owner: root
    group: root
    mode: "0600"
  no_log: true

- name: Ensure combined cert output directory exists
  ansible.builtin.file:
    path: "{{ combined_certs_dir }}"
    state: directory
    owner: root
    group: root
    mode: "0755"

# Request/renew: certbot is already idempotent-ish. We guard with `creates` to avoid
# re-issuing on first provision runs; renewals happen via cron/systemd timer (recommended).
- name: Obtain certificate via certbot dns-cloudflare (first issuance)
  ansible.builtin.command: >
    certbot certonly
    --agree-tos
    --non-interactive
    --email {{ certbot_email }}
    --dns-cloudflare
    --dns-cloudflare-credentials {{ certbot_secrets_dir }}/certbot-cloudflare.ini
    -d {{ item }}
    {% if certbot_use_staging %}--staging{% endif %}
  args:
    creates: "/etc/letsencrypt/live/{{ item }}/fullchain.pem"
  loop: "{{ certbot_cloudflare_domains | default([]) }}"
  register: certbot_issue
  changed_when: certbot_issue.rc == 0
  failed_when: certbot_issue.rc != 0
  async: 0

# Combine cert+key for Traefik/HAProxy-style PEM bundle
- name: Combine fullchain + privkey into single PEM bundle
  ansible.builtin.shell: |
    set -euo pipefail
    cat \
      /etc/letsencrypt/live/{{ item }}/fullchain.pem \
      /etc/letsencrypt/live/{{ item }}/privkey.pem \
      > {{ combined_certs_dir }}/{{ combined_cert_prefix }}{{ item }}.pem
    chmod 0600 {{ combined_certs_dir }}/{{ combined_cert_prefix }}{{ item }}.pem
  args:
    executable: /bin/bash
  loop: "{{ certbot_cloudflare_domains | default([]) }}"

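A hedged sketch for checking, on the host, what this task file actually issued and that renewal will work unattended:

```bash
# list certificates and expiry as certbot sees them
certbot certificates

# simulate a full renewal without touching the real rate limits
certbot renew --dry-run
```
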
@@ -1,3 +1,4 @@
---
- import_tasks: install.yml
- import_tasks: secrets.yml
# - import_tasks: issue_certs.yml

@@ -1 +1,2 @@
dns_cloudflare_api_token = {{ certbot_cloudflare_api_token }}
# Managed by ansible
dns_cloudflare_api_token = {{ lookup('ansible.builtin.env', 'CLOUDFLARE_API_KEY') }}

@@ -35,13 +35,37 @@ defaults
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

# Front door: public HTTP
frontend fe_http
# Front door: main frontend dedicated to end users
frontend ft_web
    bind :80

    http-request set-header X-Forwarded-Proto https
    option forwardfor

    # Cache routing acl definitions
    acl static_content path_end .jpg .jpeg .gif .png .css .js .htm .html
    acl pseudo_static path_end .php ! path_beg /dynamic/
    acl image_php path_beg /images.php
    acl varnish_available nbsrv(bk_varnish_uri) ge 1

    # Caches health detection + routing decision
    use_backend bk_varnish_uri if varnish_available static_content
    use_backend bk_varnish_uri if varnish_available pseudo_static
    use_backend bk_varnish_url_param if varnish_available image_php

    # Read debug query parameter
    http-request set-var(txn.debug) urlp(debug)
    # Define what "debug enabled" means
    acl debug_enabled var(txn.debug) -m str -i 1 true yes on
    # Debug headers
    http-request set-var(txn.http_ver) req.ver
    http-response add-header X-HA-HTTP-Version %[var(txn.http_ver)] if debug_enabled
    http-response add-header X-HA-TLS-Version %[ssl_fc_protocol] if debug_enabled
    http-response add-header X-HA-Frontend %[fe_name] if debug_enabled
    http-response add-header X-HA-Backend %[be_name] if debug_enabled
    http-response add-header X-HA-Server %[srv_name] if debug_enabled
    http-response add-header X-HA-Server %[hostname] if debug_enabled
    http-response add-header X-Debug-Client-IP %[src] if debug_enabled
    http-response add-header Cache-Control no-store if debug_enabled

    # dynamic content or all caches are unavailable
    default_backend be_traefik_http

# Front door: public HTTPS
@@ -58,47 +82,45 @@ frontend fe_https
    # acl is_h2 ssl_fc_alpn -i h2
    # http-response set-header Alt-Svc "h3=\":443\"; ma=900" if is_h2

    # =========================================================
    # Debug response headers (enabled via ?debug=1)
    # Cache routing acl definitions
    acl static_content path_end .jpg .jpeg .gif .png .css .js .htm .html
    acl pseudo_static path_end .php ! path_beg /dynamic/
    acl image_php path_beg /images.php
    acl varnish_available nbsrv(bk_varnish_uri) ge 1

    # Caches health detection + routing decision
    use_backend bk_varnish_uri if varnish_available static_content
    use_backend bk_varnish_uri if varnish_available pseudo_static
    use_backend bk_varnish_url_param if varnish_available image_php

    # Read debug query parameter
    http-request set-var(txn.debug) urlp(debug)

    # Define what "debug enabled" means
    acl debug_enabled var(txn.debug) -m str -i 1 true yes on

    # Debug headers
    http-request set-var(txn.http_ver) req.ver
    http-response add-header X-Debug-HTTP-Version %[var(txn.http_ver)] if debug_enabled
    http-response add-header X-Debug-Served-By haproxy-https if debug_enabled
    http-response add-header X-Debug-Frontend %[fe_name] if debug_enabled
    http-response add-header X-Debug-Backend %[be_name] if debug_enabled
    http-response add-header X-Debug-Server %[srv_name] if debug_enabled

    # Client & network
    http-response add-header X-Debug-Client-IP %[src] if debug_enabled
    # http-response add-header X-Debug-Client-Port %[sp] if debug_enabled
    # http-response add-header X-Debug-XFF %[req.hdr(X-Forwarded-For)] if debug_enabled

    # TLS / HTTPS details
    http-response add-header X-Debug-TLS %[ssl_fc] if debug_enabled
    http-response add-header X-Debug-TLS-Version %[ssl_fc_protocol] if debug_enabled
    http-response add-header X-Debug-TLS-Cipher %[ssl_fc_cipher] if debug_enabled

    # Request identity & correlation
    http-response add-header X-Debug-Request-ID %[unique-id] if debug_enabled
    http-response add-header X-Debug-Method %[method] if debug_enabled

    # Safety: prevent caching of debug responses
    http-response add-header Cache-Control no-store if debug_enabled
    http-response add-header X-HA-HTTP-Version %[var(txn.http_ver)] if debug_enabled
    http-response add-header X-HA-TLS-Version %[ssl_fc_protocol] if debug_enabled
    http-response add-header X-HA-Frontend %[fe_name] if debug_enabled
    http-response add-header X-HA-Backend %[be_name] if debug_enabled
    http-response add-header X-HA-Server %[srv_name] if debug_enabled
    http-response add-header X-HA-Server %[hostname] if debug_enabled
    http-response add-header X-Debug-Client-IP %[src] if debug_enabled
    http-response add-header Cache-Control no-store if debug_enabled

    # dynamic content or all caches are unavailable
    default_backend be_traefik_http


# Backend: Traefik VM
backend be_traefik_http
    mode http
    balance roundrobin
    cookie LB_SERVER insert indirect nocache dynamic
    # app servers must say if everything is fine on their side
    # and they can process requests
    option httpchk
    option httpchk GET /appcheck
    http-check expect rstring [oO][kK]
    cookie LB_SERVER insert indirect nocache
    dynamic-cookie-key {{ haproxy_dynamic_cookie_key }}

    # Health check: Traefik should respond with 404 for unknown host; that's still "alive".
@@ -109,6 +131,39 @@ backend be_traefik_http
    server traefik{{ loop.index }} {{ ip }}:{{ haproxy_traefik_port }} check cookie {{ haproxy_cookie_value }}
{% endfor %}

# VARNISH
# static backend with balance based on the uri, including the query string
# to avoid caching an object on several caches
backend bk_varnish_uri
    balance uri # in latest HAProxy version, one can add 'whole' keyword

    # Varnish must tell it's ready to accept traffic
    option httpchk HEAD /varnishcheck
    http-check expect status 200

    # client IP information
    option forwardfor

    # avoid request redistribution when the number of caches changes (crash or start up)
    hash-type consistent
{% for ip in haproxy_varnish_ip %}
    server varnish{{ loop.index }} {{ ip }}:{{ haproxy_varnish_port }} check
{% endfor %}

# cache backend with balance based on the value of the URL parameter called "id"
# to avoid caching an object on several caches
backend bk_varnish_url_param
    balance url_param id

    # client IP information
    option forwardfor

    # avoid request redistribution when the number of caches changes (crash or start up)
    hash-type consistent
{% for ip in haproxy_varnish_ip %}
    server varnish{{ loop.index }} {{ ip }}:{{ haproxy_varnish_port }} track bk_varnish_uri/varnish{{ loop.index }}
{% endfor %}

# Frontend: HAProxy prometheus exporter metrics
frontend fe_metrics
    bind :8405

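A sketch of exercising the health checks this template relies on, run from one of the haproxy nodes. The IPs are examples from the group vars, and the `/metrics` path on the exporter frontend is an assumption since the rest of `fe_metrics` falls outside this hunk:

```bash
# varnish readiness probe used by bk_varnish_uri
curl -s -o /dev/null -w '%{http_code}\n' http://10.24.2.1/varnishcheck

# web/traefik app check used by be_traefik_http (expects a body matching "ok")
curl -s http://10.24.3.6/appcheck

# prometheus exporter frontend (path assumed)
curl -s http://127.0.0.1:8405/metrics | head
```
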
ansible/roles/varnish/handlers/main.yml (new file, 6 lines)
@@ -0,0 +1,6 @@
---
- name: reload varnish
  service:
    name: varnish
    state: reloaded

ansible/roles/varnish/tasks/copy-source.yml (new file, 46 lines)
@@ -0,0 +1,46 @@
---
- file:
    path: "/etc/varnish"
    state: directory
    owner: root
    group: root
    mode: "0755"

- template:
    src: default.vcl.j2
    dest: "{{ varnish_cfg_path }}/default.vcl"
    owner: root
    group: root
    mode: "0644"
    # validate: "haproxy -c -f %s"
  notify: reload varnish

- template:
    src: vcl_deliver.vcl.j2
    dest: "{{ varnish_cfg_path }}/vcl_deliver.vcl"
    owner: root
    group: root
    mode: "0644"
    # validate: "haproxy -c -f %s"
  notify: reload varnish

- file:
    path: "/etc/varnish/includes"
    state: directory
    owner: root
    group: root
    mode: "0755"

- template:
    src: includes/x-cache-header.vcl.j2
    dest: "{{ varnish_cfg_path }}/includes/x-cache-header.vcl"
    owner: root
    group: root
    mode: "0644"
    # validate: "haproxy -c -f %s"
  notify: reload varnish

- service:
    name: varnish
    state: restarted

ansible/roles/varnish/tasks/install.yml (new file, 113 lines)
@@ -0,0 +1,113 @@
---
- name: Ensure apt cache is up to date (pre)
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 3600

- name: Debian only - ensure debian-archive-keyring is installed
  ansible.builtin.apt:
    name: debian-archive-keyring
    state: present
  when: ansible_facts.distribution == "Debian"

- name: Ensure required tools are installed (curl, gnupg, apt-transport-https)
  ansible.builtin.apt:
    name:
      - curl
      - gnupg
      - apt-transport-https
    state: present

# Packagecloud repo parameters:
#   os   = "debian" or "ubuntu"
#   dist = codename (e.g. bookworm, bullseye, focal, jammy, noble)
- name: Set packagecloud repo parameters
  ansible.builtin.set_fact:
    varnish_pkgcloud_os: "{{ 'ubuntu' if ansible_facts.distribution == 'Ubuntu' else 'debian' }}"
    varnish_pkgcloud_dist: "bookworm"
    # varnish_pkgcloud_dist: "{{ ansible_facts.distribution_release }}"

# ---- apt >= 1.1 path (keyrings + signed-by) ----
- name: Ensure /etc/apt/keyrings exists
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    mode: "0755"

- name: Download packagecloud GPG key (ascii)
  ansible.builtin.get_url:
    url: https://packagecloud.io/varnishcache/varnish{{ varnish_major }}/gpgkey
    dest: /tmp/varnishcache_varnish{{ varnish_major }}.gpgkey
    mode: "0644"

- name: Dearmor packagecloud key into /etc/apt/keyrings
  ansible.builtin.command: >
    gpg --dearmor -o /etc/apt/keyrings/varnishcache_varnish{{ varnish_major }}-archive-keyring.gpg
    /tmp/varnishcache_varnish{{ varnish_major }}.gpgkey
  args:
    creates: /etc/apt/keyrings/varnishcache_varnish{{ varnish_major }}-archive-keyring.gpg

- name: Ensure Sequoia crypto-policy directory exists
  ansible.builtin.file:
    path: /etc/crypto-policies/back-ends
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Allow SHA1 signatures for sequoia (packagecloud compatibility)
  ansible.builtin.copy:
    dest: /etc/crypto-policies/back-ends/sequoia.config
    owner: root
    group: root
    mode: "0644"
    backup: true
    content: |
      [hash_algorithms]
      sha1 = "always"

- name: Add Varnish 6.0 LTS repo
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/varnishcache_varnish{{ varnish_major }}-archive-keyring.gpg] https://packagecloud.io/varnishcache/varnish60lts/{{ varnish_pkgcloud_os }}/ {{ varnish_pkgcloud_dist }} main"
    filename: varnishcache_varnish{{ varnish_major }}
    state: present

- name: Add Varnish 6.0 LTS source repo (optional)
  ansible.builtin.apt_repository:
    repo: "deb-src [signed-by=/etc/apt/keyrings/varnishcache_varnish{{ varnish_major }}-archive-keyring.gpg] https://packagecloud.io/varnishcache/varnish60lts/{{ varnish_pkgcloud_os }}/ {{ varnish_pkgcloud_dist }} main"
    filename: varnishcache_varnish{{ varnish_major }}
    state: present
  when:
    - varnish_enable_deb_src | default(false)

- name: Update apt cache (after adding repo)
  ansible.builtin.apt:
    update_cache: true

- name: Install Varnish Cache 6.0 LTS
  ansible.builtin.apt:
    name: "{{ varnish_packages | default(['varnish']) }}"
    state: present

- name: Copy systemd template
  become: true
  ansible.builtin.template:
    src: varnish-systemd.j2
    dest: /lib/systemd/system/varnish.service
    owner: root
    mode: "0644"

- name: Restart systemd daemon
  become: true
  ansible.builtin.systemd:
    daemon_reload: yes

- name: Reload varnish service
  become: true
  ansible.builtin.systemd:
    name: varnish.service
    state: reloaded

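A quick post-install check sketch to confirm the packagecloud repo won and the 6.0 LTS build is what actually got installed:

```bash
# the candidate version should come from packagecloud.io/varnishcache/varnish60lts
apt-cache policy varnish

# running binary version and unit state
varnishd -V
systemctl status varnish --no-pager
```
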
@@ -1,57 +1,2 @@
---
- name: update apt
  become: true
  apt:
    update_cache: yes
    cache_valid_time: 86400

- name: install required packages
  package:
    name:
      - debian-archive-keyring
      - curl
      - gnupg
      - apt-transport-https

- name: add varnish apt key & repo
  block:
    - name: add varnish key
      apt_key:
        url: https://packagecloud.io/varnishcache/varnish60lts/gpgkey
        state: present

    - name: add varnish repo
      apt_repository:
        repo: 'deb https://packagecloud.io/varnishcache/varnish60lts/{{ varnish_release }} {{ varnish_release_codename }} main'
        state: present

    - name: add varnish repo src
      apt_repository:
        repo: 'deb-src https://packagecloud.io/varnishcache/varnish60lts/{{ varnish_release }} {{ varnish_release_codename }} main'
        state: present

- name: update apt
  become: true
  apt:
    update_cache: yes
    cache_valid_time: 86400

- name: install varnish package
  package:
    name: varnish

- name: copy systemd template
  template:
    src: varnish-systemd.j2
    dest: /lib/systemd/system/varnish.service
    owner: root
    mode: 644

- name: restart systemd daemon
  systemd:
    daemon_reload: yes

- name: restart varnish service
  systemd:
    name: varnish.service
    state: reloaded
- import_tasks: install.yml
- import_tasks: copy-source.yml

ansible/roles/varnish/templates/default.vcl.j2 (new file, 206 lines)
@@ -0,0 +1,206 @@
vcl 4.1;

import std;
import directors;

include "vcl_deliver.vcl";
include "includes/x-cache-header.vcl";

{% for ip in haproxy_traefik_ip %}
backend bk_appsrv_static-{{ loop.index }} {
    .host = "{{ ip }}";
    .port = "{{ haproxy_traefik_port }}";
    .connect_timeout = 3s;
    .first_byte_timeout = 10s;
    .between_bytes_timeout = 5s;
    .probe = {
        .url = "/ping";
        .expected_response = 404;
        .timeout = 1s;
        .interval = 3s;
        .window = 2;
        .threshold = 2;
        .initial = 2;
    }
}

{% endfor %}

/*
 * Who is allowed to PURGE
 */
acl purge {
    "127.0.0.1";
    "localhost";
    # add your admin / app hosts here
}

sub vcl_init {
    new vdir = directors.round_robin();
{% for ip in haproxy_traefik_ip %}
    vdir.add_backend(bk_appsrv_static-{{ loop.index }});
{% endfor %}
}

sub vcl_recv {
    ### Default options

    # Health Checking
    if (req.url == "/varnishcheck") {
        return (synth(200, "health check OK!"));
    }

    # Set default backend
    set req.backend_hint = vdir.backend();

    # grace period (stale content delivery while revalidating)
    set req.grace = 30s;

    # Purge request
    if (req.method == "PURGE") {
        if (client.ip !~ purge) {
            return (synth(405, "Not allowed."));
        }
        return (purge);
    }

    # Accept-Encoding header clean-up
    if (req.http.Accept-Encoding) {
        # use gzip when possible, otherwise use deflate
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm, remove accept-encoding header
            unset req.http.Accept-Encoding;
        }

        # Microsoft Internet Explorer 6 is well know to be buggy with compression and css / js
        if (req.url ~ "\.(css|js)(\?.*)?$" && req.http.User-Agent ~ "MSIE 6") {
            unset req.http.Accept-Encoding;
        }
    }

    # Enable debug headers through query param
    if (req.url ~ "(?i)debug=(true|yes|1)") {
        set req.http.X-debug = true;
    }

    ### Per host/application configuration
    # bk_appsrv_static
    # Stale content delivery
    if (std.healthy(req.backend_hint)) {
        set req.grace = 30s;
    } else {
        set req.grace = 1d;
    }

    # Cookie ignored in these static pages
    unset req.http.Cookie;

    ### Common options
    # Static objects are first looked up in the cache
    if (req.url ~ "\.(png|gif|jpg|swf|css|js)(\?.*)?$") {
        return (hash);
    }

    # Default: look for the object in cache
    return (hash);
}

sub vcl_hash {
    hash_data(req.url);

    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
}

/*
 * Called after a successful PURGE
 */
sub vcl_purge {
    return (synth(200, "Purged."));
}

sub vcl_backend_response {
    # Stale content delivery
    set beresp.grace = 1d;

    # Hide Server information
    unset beresp.http.Server;

    # Store compressed objects in memory (gzip at fetch time)
    # Varnish can deliver gunzipped/gzipped depending on client support
    if (beresp.http.Content-Type ~ "(?i)(text|application)") {
        set beresp.do_gzip = true;
    }

    ###################
    #   cache rules   #
    ###################
    # HTML pages → short cache or no cache
    if (bereq.url ~ "\.html$") {
        set beresp.ttl = 30s;            # Cache briefly
        set beresp.uncacheable = true;   # Or disable cache entirely
    }

    # JavaScript & CSS → long cache
    if (bereq.url ~ "\.(js|css)$") {
        set beresp.ttl = 1d;
    }

    # Images under /image/ → long cache
    if (bereq.url ~ "^/images/.*\.(svg|png|jpe?g)$") {
        set beresp.ttl = 1y;
    }

    # Favicons → long cache
    if (bereq.url ~ "^/favicons/") {
        set beresp.ttl = 1y;
    }

    # Fallback: ensure some cache
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 22s;
    }

    set beresp.http.X-TTL = beresp.ttl;

    # remove any cookie on static or pseudo-static objects
    unset beresp.http.Set-Cookie;

    return (deliver);
}

sub vcl_deliver {
    # unset resp.http.Via;
    unset resp.http.X-Varnish;

    # Handle conditional request with ETag
    if (
        req.http.If-None-Match &&
        req.http.If-None-Match == resp.http.ETag
    ) {
        return (synth(304));
    }

    return (deliver);
}

sub vcl_synth {
    if (resp.status == 304) {
        set resp.http.ETag = req.http.If-None-Match;
        # set resp.http.Content-Length = "0";
        return (deliver);
    }

    # Keep defaults; this replaces the old vcl_error.
    # (Your old "obj.status == 751" special case isn't referenced anywhere
    # in the provided VCL, so it was dropped.)
    return (deliver);
}

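A usage sketch against this VCL from one of the hosts in the `purge` acl (localhost). The host header is one of the domains already used in this repo, and it assumes varnish listens on port 80 as configured in the group vars:

```bash
# synthesised health check from vcl_recv
curl -is http://127.0.0.1/varnishcheck | head -n 1

# invalidate a cached object – only allowed from the purge acl
curl -is -X PURGE -H 'Host: whoami.schleppe.cloud' http://127.0.0.1/ | head -n 1
```
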
ansible/roles/varnish/templates/includes/x-cache-header.vcl.j2 (new file, 43 lines)
@@ -0,0 +1,43 @@
sub vcl_recv {
    unset req.http.X-Cache;
}

sub vcl_hit {
    set req.http.X-Cache = "HIT";
}

sub vcl_miss {
    set req.http.X-Cache = "MISS";
}

sub vcl_pass {
    set req.http.X-Cache = "PASS";
}

sub vcl_pipe {
    set req.http.X-Cache = "PIPE uncacheable";
}

sub vcl_synth {
    set resp.http.X-Cache = "SYNTH";
}

sub vcl_deliver {
    if (obj.uncacheable) {
        set req.http.X-Cache = req.http.X-Cache + " uncacheable" ;
    } else {
        set req.http.X-Cache = req.http.X-Cache + " cached" + " (real age: " + resp.http.Age + ", hits: " + obj.hits + ", ttl: " + regsub(resp.http.x-ttl, "\..*", "") + ")";
    }

    # if we are gracing, make sure the browser doesn't cache things, and set our maxage to 1
    # also log grace delivery
    if (req.http.graceineffect) {
        set resp.http.Cache-Control = regsub(resp.http.Cache-Control, "max-age=[0-9]*", "max-age=1");
        set resp.http.Cache-Control = regsub(resp.http.Cache-Control, "channel-maxage=[0-9]*", "channel-maxage=1");
        set req.http.X-Cache = req.http.X-Cache + " [grace: " + req.http.graceineffect + " " + req.http.grace + ", remaining: " + req.http.graceduration + "]";
    }

    # uncomment the following line to show the information in the response
    set resp.http.X-Cache = req.http.X-Cache;
}

ansible/roles/varnish/templates/vcl_deliver.vcl.j2 (new file, 40 lines)
@@ -0,0 +1,40 @@
sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.

    if (resp.status == 503) {
        set resp.http.failing-backend = "true";
    }

    # Give some debug
    if (req.http.X-debug && req.esi_level == 0) {
        set resp.http.X-Backend = req.backend_hint;
        set resp.http.X-Backend-Url = req.url;
        set resp.http.X-Varnish-Server = server.hostname;
    } else {
        # not debug, strip some headers
        unset resp.http.X-Cache;
        unset resp.http.X-Backend;
        unset resp.http.x-upstream;
        unset resp.http.x-request-uri;
        unset resp.http.Via;
        unset resp.http.xkey;
        unset resp.http.x-goog-hash;
        unset resp.http.x-goog-generation;
        unset resp.http.X-GUploader-UploadID;
        unset resp.http.x-goog-storage-class;
        unset resp.http.x-goog-metageneration;
        unset resp.http.x-goog-stored-content-length;
        unset resp.http.x-goog-stored-content-encoding;
        unset resp.http.x-goog-meta-goog-reserved-file-mtime;
        unset resp.http.Server;
        unset resp.http.X-Apache-Host;
        unset resp.http.X-Varnish-Backend;
        unset resp.http.X-Varnish-Host;
        unset resp.http.X-Nginx-Host;
        unset resp.http.X-Upstream-Age;
        unset resp.http.X-Retries;
        unset resp.http.X-Varnish;
    }
}

ansible/scripts/generate-inventory.sh (new executable file, 43 lines)
@@ -0,0 +1,43 @@
#!/usr/local/bin/bash
#
# Usage: ./scripts/generate-inventory.sh | pbcopy

cd ../hetzner-pulumi
pulumi stack output --json | jq -r '
  # extract dc (nbg / va) positionally from hostname
  def dc:
    (.name | capture("-(?<dc>nbg|hel|ash|va)[0-9]*-").dc);

  def region:
    if dc == "nbg" then "eu" else "us" end;

  def pad($n):
    tostring as $s
    | ($n - ($s|length)) as $k
    | if $k > 0 then ($s + (" " * $k)) else $s end;

  .inventory.vms
  | map({
      region: region,
      role: (.name | split("-")[0]),
      idx: (.name | capture("-(?<n>[0-9]+)$").n),
      ip: .publicIpv4,
      dc: dc
    })
  | group_by(.region)
  | .[]
  | .[0].region as $r
  | "[\($r)]",
    (
      sort_by(.role, (.idx | tonumber))
      | .[]
      | (
          ("\(.role)-\(.dc)-\(.idx)" | pad(15)) +
          ("ansible_host=\(.ip)" | pad(30)) +
          ("ansible_port=22" | pad(18)) +
          "ansible_user=root"
        )
    ),
  ""
'

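Example of what the script is expected to emit; the host names and addresses below are illustrative, not real stack output. The result is pasted into `hetzner.ini`:

```bash
./scripts/generate-inventory.sh
# [eu]
# haproxy-nbg-1  ansible_host=203.0.113.10   ansible_port=22   ansible_user=root
# varnish-nbg-1  ansible_host=203.0.113.11   ansible_port=22   ansible_user=root
# web-nbg-1      ansible_host=203.0.113.12   ansible_port=22   ansible_user=root
#
# [us]
# web-ash-1      ansible_host=198.51.100.20  ansible_port=22   ansible_user=root
```
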
ansible/scripts/update-config_certbot-domains.sh (new file, 14 lines)
@@ -0,0 +1,14 @@
#!/usr/local/bin/bash
#
# Usage: ./scripts/update-config_certbot-domains.sh | pbcopy

CERTBOT_EXPORT_KEY=certbot_cloudflare_domains

EXPORT_VARIABLES="$(pwd)/group_vars/haproxy.yml"
yq -i 'del(.certbot_cloudflare_domains)' $EXPORT_VARIABLES

cd ../hetzner-pulumi
pulumi stack output --json | jq -r --arg key $CERTBOT_EXPORT_KEY '
  ($key + ":\n") +
  (.inventory.domains | map(" - " + .) | join("\n"))
' >> $EXPORT_VARIABLES
ansible/scripts/update-config_varnish-ips.sh (new file, 20 lines)
@@ -0,0 +1,20 @@
#!/usr/local/bin/bash
#
# Usage: ./scripts/update-config_varnish-ips.sh

IP_EXPORT_KEY=haproxy_varnish_ip
ANSIBLE_DIR="$(pwd)"
PULIMI_DIR="$(pwd)/../hetzner-pulumi"

EXPORT_VARIABLES="$(pwd)/group_vars/haproxy.yml"
yq -i 'del(.haproxy_varnish_ip)' $EXPORT_VARIABLES

cd $PULIMI_DIR
pulumi stack output --json | jq -r --arg key $IP_EXPORT_KEY '
  def varnish_private_ips:
    .inventory.vms
    | map(select(.name | startswith("varnish")) | .privateIp);

  ($key + ":\n") +
  (varnish_private_ips | map(" - " + .) | join("\n"))
' >> $EXPORT_VARIABLES
ansible/scripts/update-config_webserver-ips.sh (new file, 35 lines)
@@ -0,0 +1,35 @@
#!/usr/local/bin/bash
#
# Usage: ./scripts/update-config_webserver-ips.sh

IP_EXPORT_KEY=haproxy_traefik_ip
ANSIBLE_DIR="$(pwd)"
PULIMI_DIR="$(pwd)/../hetzner-pulumi"

EXPORT_VARIABLES="$(pwd)/group_vars/haproxy.yml"
yq -i 'del(.haproxy_traefik_ip)' $EXPORT_VARIABLES

cd ../hetzner-pulumi
pulumi stack output --json | jq -r --arg key $IP_EXPORT_KEY '
  def web_private_ips:
    .inventory.vms
    | map(select(.name | startswith("web")) | .privateIp);

  ($key + ":\n") +
  (web_private_ips | map(" - " + .) | join("\n"))
' >> $EXPORT_VARIABLES

cd $ANSIBLE_DIR
EXPORT_VARIABLES="$(pwd)/group_vars/varnish.yml"
yq -i 'del(.haproxy_traefik_ip)' $EXPORT_VARIABLES

cd $PULIMI_DIR
pulumi stack output --json | jq -r --arg key $IP_EXPORT_KEY '
  def varnish_private_ips:
    .inventory.vms
    | map(select(.name | startswith("web")) | .privateIp);

  ($key + ":\n") +
  (varnish_private_ips | map(" - " + .) | join("\n"))
' >> $EXPORT_VARIABLES

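Taken together, a typical refresh of the ansible config after the pulumi stack changes might look like this (script names as above; `hetzner.ini` is the inventory file used by the playbooks in the README):

```bash
cd ansible

# regenerate the inventory and paste it into hetzner.ini
./scripts/generate-inventory.sh | pbcopy

# rewrite the group_vars entries that are derived from stack outputs
./scripts/update-config_webserver-ips.sh
./scripts/update-config_varnish-ips.sh
./scripts/update-config_certbot-domains.sh
```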