Cert Manager

We want to configure a wildcard certificate for a selected domain using Let's Encrypt. This assumes Cloudflare is used for DNS management. We also want to keep our traditional APISIX deployment (as we will use API calls for other experiments), which requires us to create a synchronisation job that pushes the certificate to APISIX, a service account, etc. As usual, scripts are heavily generated using AI.

Install Cert manager

microk8s enable cert-manager

Create API token in CF

Navigate to Account API Tokens (under the api-tokens URL) and create a DNS token. Take note of the token, as it will not be visible later through the Cloudflare UI (like most API tokens).
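
Optionally, sanity-check the token before storing it; Cloudflare exposes a token verification endpoint, and a valid token reports "status": "active":

curl -s https://api.cloudflare.com/client/v4/user/tokens/verify \
  -H "Authorization: Bearer <<YOUR_CLOUDFLARE_API_TOKEN>>"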

Store the token as K8s secret

Run

kubectl create secret generic cloudflare-api-token-secret \
  --from-literal=api-token='<<YOUR_CLOUDFLARE_API_TOKEN>>' \
  -n cert-manager

replacing <<YOUR_CLOUDFLARE_API_TOKEN>> with the actual token.

Configure cert-manager to download certs

Run

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    email: <<MYEMAIL>>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
    - dns01:
        cloudflare:
          email: <<CF-LOGIN>>
          apiTokenSecretRef:
            name: cloudflare-api-token-secret
            key: api-token
EOF

replacing:

  • <<MYEMAIL>> with your email, and
  • <<CF-LOGIN>> with your Cloudflare login.
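
The issuer should register with Let's Encrypt shortly after; check that it reports Ready:

kubectl get clusterissuer letsencrypt-dns01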

Define the certificate

Run

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: <<WILDCARD-NAME>>
  namespace: default
spec:
  secretName: <<WILDCARD-DOMAIN-TLS-NAME>>
  dnsNames:
    - "*.<<WILDCARD-DOMAIN>>"
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer
EOF

replacing:

  • <<WILDCARD-DOMAIN>> with your domain,
  • <<WILDCARD-DOMAIN-TLS-NAME>> with the name of the TLS secret, and
  • <<WILDCARD-NAME>> with the name used to reference the wildcard in the APISIX SSL config.
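
Issuance can take a couple of minutes while the DNS-01 challenge propagates; progress can be checked with:

kubectl get certificate -n default
kubectl describe certificate <<WILDCARD-NAME>> -n default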

Certificate synchronisation pre-requisites

Define secret to be used to upload SSL cert to APISIX

Load the admin key for APISIX. Note that ideally a separate account should be created, but as this is all pure development, we will take some shortcuts.
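
The snippet below is piped through envsubst, so the admin key needs to be exported in the current shell first (placeholder shown; use the admin key from your APISIX values):

export APISIX_ADMIN_TOKEN='<<YOUR_APISIX_ADMIN_KEY>>'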

cat << EOF | envsubst | kubectl apply -f - 
apiVersion: v1
kind: Secret
metadata:
  name: apisix-admin-secret
  namespace: default
stringData:
  admin-api-key: $APISIX_ADMIN_TOKEN
EOF

Create service account

Create a service account for synchronising the certificate downloaded by cert-manager to APISIX.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cert-sync
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cert-sync
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cert-sync
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cert-sync
subjects:
- kind: ServiceAccount
  name: cert-sync
  namespace: default
EOF
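
The binding can be verified with kubectl's impersonation check, which should answer yes:

kubectl auth can-i get secrets -n default --as=system:serviceaccount:default:cert-sync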

Enable SSL with APISIX

In values.yaml, make sure that SSL is enabled, e.g.

  apisix:
    ssl:
      enabled: true

Also, we will fix the nodePort on the gateway service so we can address it from the OCI LB, as below:

    tls:
      servicePort: 443
      nodePort: 31640

Update apisix deployment as here.
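
A sketch of that update, assuming APISIX was installed as a Helm release named apisix from the apisix repo into the apisix namespace (adjust to your setup):

microk8s helm3 upgrade apisix apisix/apisix -n apisix -f values.yaml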

Define cronjob pushing cert-manager downloaded cert to APISIX

Run

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: apisix-wildcard-cert-sync
  namespace: default
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cert-sync
          restartPolicy: OnFailure
          dnsPolicy: ClusterFirst
          containers:
          - name: cert-sync
            image: heyvaldemar/aws-kubectl
            env:
              - name: SECRET_NAME
                value: <<WILDCARD-DOMAIN-TLS-NAME>>
              - name: DOMAIN
                value: "*.<<WILDCARD-DOMAIN>>""
              - name: APISIX_ADMIN
                value: http://apisix-admin.apisix.svc.cluster.local:9180
              - name: API_KEY
                valueFrom:
                  secretKeyRef:
                    name: apisix-admin-secret
                    key: admin-api-key
            command:
              - /bin/bash
              - -c
              - |
                set -euo pipefail

                echo "🔄 Syncing certificate for domain: ${DOMAIN}"

                CERT=$(kubectl get secret "${SECRET_NAME}" -n default -o jsonpath='{.data.tls\.crt}' | base64 -d)
                KEY=$(kubectl get secret "${SECRET_NAME}" -n default -o jsonpath='{.data.tls\.key}' | base64 -d)

                # Build JSON payload safely (requires jq in image)
                jq -n \
                  --arg cert "$CERT" \
                  --arg key "$KEY" \
                  --arg domain "$DOMAIN" \
                  '{cert: $cert, key: $key, snis: [$domain]}' > /tmp/payload.json

                echo "➡️ Pushing certificate to APISIX (endpoint: /apisix/admin/ssls/)..."
                HTTP_CODE=$(curl -s -o /tmp/resp.json -w "%{http_code}" \
                  -X PUT "${APISIX_ADMIN}/apisix/admin/ssls/<<WILDCARD-NAME>>" \
                  -H "X-API-KEY: ${API_KEY}" \
                  -H "Content-Type: application/json" \
                  -d @/tmp/payload.json || true)

                echo "HTTP status: ${HTTP_CODE}"
                echo "Response body:"
                cat /tmp/resp.json
EOF

replacing:

  • <<WILDCARD-DOMAIN-TLS-NAME>> with the TLS secret name,
  • <<WILDCARD-DOMAIN>> with your domain, and
  • <<WILDCARD-NAME>> with the name used when defining the certificate.

Note: this specific step required a lot of debugging, and AI tools did not really help much.
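
Rather than waiting for the 03:00 schedule, the job can be triggered once by hand and its logs inspected:

kubectl create job cert-sync-manual --from=cronjob/apisix-wildcard-cert-sync -n default
kubectl logs -n default job/cert-sync-manual -f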

Update the route to use SSL

Run

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $APISIX_ADMIN_TOKEN" -X PUT -i -d '{
        "name": "1",
        "status": 1,
        "id": "1",
        "enable_websocket": false,
        "priority": 0,
        "uri": "/deck-api/*",
        "host": "deck-api.<<WILDCARD-DOMAIN>>",
        "methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE"
        ],
        "upstream_id": "1"
}'

replacing <<WILDCARD-DOMAIN>>.
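
The stored route can be read back with a GET against the same endpoint to confirm the host was set:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H "X-API-KEY: $APISIX_ADMIN_TOKEN"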

Test

To test, map the APISIX TLS service port to local port 9443 and then run the curl below.
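
A minimal way of mapping the port, assuming the gateway service is named apisix-gateway in the apisix namespace with TLS on servicePort 443 as configured above (adjust to your release):

kubectl port-forward -n apisix svc/apisix-gateway 9443:443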

curl -X POST https://deck-api.<<WILDCARD-DOMAIN>>:9443/deck-api/deck/create -H 'Content-Type: application/json'  -d '{
"deckType": "standard52"
}' --resolve deck-api.<<WILDCARD-DOMAIN>>:9443:127.0.0.1

Enable external SSL access:

We took a shortcut here: the APISIX gateway is exposed via a NodePort service, and a manually configured OCI LoadBalancer sits in front of it, as follows:

Add details

Select Public visibility and a Reserved IP address. Under Choose networking, select your network; under Subnet, select your public subnet.

Choose backends

On Add backends, select all 4 nodes. Health check: TCP, port 31640.

Configure listener

Specify the type of traffic your listener handles: TCP, port 443. Uncheck Use SSL, as we will let APISIX manage it.

Manage Logging

Configure whatever suits, but select Request ID and use the WWWWW header name.

Open external access to 443

For TCP traffic on port 443, open external access from all IPs (0.0.0.0/0).

Map DNS records for your api

Map *.<<WILDCARD-DOMAIN>> using an A record to the external IP of your LB, e.g. *.api A 1.1.1.1, where 1.1.1.1 is your external IP.

Do NOT proxy the record, as the Cloudflare SSL certificate will not cover it.
Note: we do have a certificate on APISIX, so we could upload it to CF.
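
If you prefer to script this, the same DNS-scoped token can create the record through the Cloudflare API (the zone ID is a placeholder here; proxied must stay false):

curl -s -X POST "https://api.cloudflare.com/client/v4/zones/<<ZONE_ID>>/dns_records" \
  -H "Authorization: Bearer <<YOUR_CLOUDFLARE_API_TOKEN>>" \
  -H "Content-Type: application/json" \
  -d '{"type":"A","name":"*.<<WILDCARD-DOMAIN>>","content":"1.1.1.1","ttl":300,"proxied":false}'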

And that is it. All requests should now be hitting APISIX from a public IP.

Issues

Some of the issues with this setup are as follows:

  • DNS records should take advantage of CF proxying, which in turn would provide their DDoS protection.
  • DNS CNAME records pointing at the VMs, rather than actual IPs, would reduce management overhead.
  • No monitoring - one would want alerts on certificate replacement.
  • A full-scale admin account should not be used for synchronising certs.
  • IPs reported in APISIX logs are internal.

Use OCI Network Load Balancer to preserve IP

Add a Network Load Balancer instead of the classic one, preserving the client IP.

Open access to ports 443 and 31640 from 0.0.0.0/0. This does pose a security risk; however, the public IPs of the individual nodes are not visible from outside, and it appears there are ways of hiding them.

Create new Network Load Balancer

Add Details

Visibility: Public

IP: Either Ephemeral or Reserved IPv4 (if you got one before)

Select Network and public subnet.

NSG - Disable (for now).

Hit Next button.

Configure listener

Select TCP, port 443 (we want to preserve standard https port yet rely on APISIX serving certificates).

Do not enable Proxy.

Choose backends

Mode Default

Select the Add backends button:

Add Backends

Select Compute instances

Select an instance

Select its internal IP

Select port 31640

Select Add another backend and repeat for remaining 3 instances.

Select Add backends.

Choose backends

(Back on the previous screen.) Select Preserve source IP.

Specify health check policy

Protocol: TCP, port 31640.

Leave the rest at the defaults.

Select Manually configure security list rules after the network load balancer is created

Review and create.

Record the IP.

Map the IP using DNS.

Ensure the APISIX values.yaml sets externalTrafficPolicy to Local:

externalTrafficPolicy: Local

Failed alternative: MetalLB

Configuring MetalLB did not work out: in the end, it appeared necessary to assign the actual IP to the target machine, which defeats the purpose of configuring a LB inside K8s.

The following steps make it somewhat work, but for some reason the IP is not reachable.

Reserve an Oracle IP.

microk8s enable metallb

When asked for the IP range, use oracleIP-oracleIP (a single-address range).

Open port 7946 for UDP and TCP.

Fix the firewall rules:

sudo iptables -I INPUT 5 -s 10.0.0.0/16 -p tcp --dport 7946 -j ACCEPT
sudo iptables -I INPUT 5 -s 10.0.0.0/16 -p udp --dport 7946 -j ACCEPT
sudo iptables -I OUTPUT 5 -d 10.0.0.0/16 -p tcp --sport 7946 -j ACCEPT
sudo iptables -I OUTPUT 5 -d 10.0.0.0/16 -p udp --sport 7946 -j ACCEPT
sudo sysctl -w net.ipv4.conf.all.arp_announce=2

Enable Prometheus stats on APISIX

Modify values.yaml as below:

    serviceMonitor:
      # -- Enable or disable Apache APISIX serviceMonitor
      enabled: true
      # -- namespace where the serviceMonitor is deployed, by default, it is the same as the namespace of the apisix
      namespace: ""
      # -- name of the serviceMonitor, by default, it is the same as the apisix fullname
      name: ""
      # -- interval at which metrics should be scraped
      interval: 15s
      # -- @param serviceMonitor.labels ServiceMonitor extra labels
      labels:
        release: kube-prom-stack

Two changes:

  • enabled to true
  • labels to match what microk8s observability expects.

Update APISIX as here.

Then you are only left with loading the desired Grafana dashboard to display the collected stats, i.e. https://grafana.com/grafana/dashboards/11719-apache-apisix/.
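
Before wiring up the dashboard, you can confirm metrics are actually exposed; a sketch, assuming the default Prometheus exporter port 9091 and a deployment named apisix in the apisix namespace:

kubectl -n apisix port-forward deploy/apisix 9091:9091 &
curl -s http://127.0.0.1:9091/apisix/prometheus/metrics | head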

Change APISIX to json logging

Simple:

Change values.yaml:

        accessLogFormat: '{"time":"$time_iso8601","remote_addr":"$remote_addr","remote_user":"$remote_user","host":"$http_host","request":"$request","status":"$status","body_bytes_sent":"$body_bytes_sent","request_time":"$request_time","referer":"$http_referer","user_agent":"$http_user_agent","upstream_addr":"$upstream_addr","upstream_status":"$upstream_status","upstream_response_time":"$upstream_response_time","upstream_uri":"$upstream_scheme://$upstream_host$upstream_uri","message_id":"$sent_http_message_id"}'

        # -- Allows setting json or default characters escaping in variables
        accessLogFormatEscape: json

Update APISIX as here.
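
A quick way of confirming the logs are valid JSON, assuming access logs go to stdout (the chart default) and the deployment is named apisix:

kubectl logs -n apisix deploy/apisix --tail=20 | grep '^{' | jq .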