
How this blog works
#org #syncthing #kubernetes


This blog is a creative outlet for me that allows me to refine my understanding of the topics I'm interested in. I don't even have any tracking mechanisms because I judge each article's value by my own set of standards rather than external appreciation.

That being said, I would love to hear how to improve the site. I'm thinking of adding a comment section or an email list option in the near future and I will be sure to update this post when I do.

UPDATE July 30, 2021: I've added tags.

UPDATE Aug 6, 2021: I've added comments.

General process

I write in org mode and export to html. The css is written afterwards instead of modifying the export process. This allows me to use the Chrome inspector to quickly optimize an element rather than going through a build-and-test cycle. The attributes ox adds to each html element are descriptive and unique enough to target easily with css selectors.
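
For example, ox wraps the page body in a div with id content and each top-level section in an outline-2 container, so rules like these land precisely (illustrative values, of course):

/* Target ox-generated containers directly */
#content { max-width: 46em; margin: 0 auto; }
.outline-2 { margin-bottom: 2em; }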

I also use the org-info-js script to add keybindings, advanced TOC options, and section folding to the website.

Dark mode is enabled using dark-mode-toggle and css media queries.

The built html files are synced to my server using Syncthing and I serve the website using Nginx. All this is done using kubernetes on Digital Ocean. The blog, along with all my org files, is backed up daily using kubernetes VolumeSnapshots.

Folder structure

As I mentioned, the blog is a subset of all my org files which are synced using Syncthing.

           
~/org              My org directory (`my-org-directory')
  /web             The 'source code' for the website
    /emacs
    /thoughts      My subfolders of different interests
    /web
    /public        The exported html files
      /js          JavaScript files
      /css         CSS files

Of all of these folders, the only one the public sees is the public folder.

Git vs Syncthing

Git is a great tool for keeping track of plaintext files. Its usefulness with regard to a personal blog, though, is probably overstated. If you find yourself committing messages like 'added some stuff', 'updated blog', or just 'updates', maybe it's time you asked: why bother?

I have a dual boot system as I need to use some Windows applications for work. With git, I would need to push and pull all changes every time I switched systems.

Syncthing is a much more behind-the-scenes approach. It runs at startup and syncs files as they are created. I have it connected to my phone, both operating systems, and a hosted syncthing server in the cloud. If any three of those systems went down, I would still have a full copy of all the files to restore from (it's happened once).

I also take daily, weekly, and monthly snapshots of the syncthing cloud server which can be easily restored from.

One thing I will say in favor of git is that branches are a really nice tool for graduated deployments. You can have a dev branch and a main branch and only do development on the dev branch and merge into main when you feel comfortable. Programming a website 'live' like I'm doing is definitely not recommended for a serious project.

Local development

In each folder in my blog, there is an index.org file which lists the articles in that folder. When working on a new blog post, the last thing I do is add it to the index.org so that people will not see unfinished work. This is not a security measure: I would never put sensitive information anywhere in the blog folder, even in places that are conventionally inaccessible.

The syncing process takes some time, so I like to run a local web server to get instant feedback. The best, simplest web server for local development, hands down, is http-server from npm. I install it globally:

npm install --global http-server

Then I start a compilation, passing it my public folder:

M-x compile RET http-server ~/org/web/public RET

You can skip the installation step by using npx:

M-x compile RET npx http-server ~/org/web/public RET

Now I can load the website at http://localhost:8080 and instantly review changes. The website is also available on my local network so I can look at it from my phone using my computer's IP address instead of localhost. Most web traffic today is mobile!

Another honorable mention is reload, also from npm, which refreshes the browser for you whenever the files change.
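
A typical invocation looks much like http-server (the flags here are assumed from reload's CLI; check npx reload --help for your version):

M-x compile RET npx reload --dir ~/org/web/public --browser RET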

Elisp configuration

First, require the org export package:

(require 'ox-publish)

Appearance

Here are some appearance-related org export configurations:

(setq org-export-with-section-numbers nil
      org-export-with-toc nil)

By default, I turn off section numbers and the table of contents.
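
Since these are only defaults, any individual article can opt back in with a per-file options line:

#+OPTIONS: num:t toc:t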

Syntax highlighting

Syntax highlighting within src blocks is accomplished using the htmlize package:

(use-package htmlize)
(setq org-src-fontify-natively t)
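
By default htmlize bakes the colors of whatever Emacs theme is active into the exported HTML. If you'd rather control code colors from your stylesheets (handy for dark mode), htmlize can emit class names instead; a minimal sketch:

;; Emit class names like .org-keyword instead of inline colors,
;; then style them from css
(setq org-html-htmlize-output-type 'css)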

Navigation

(setq org-html-link-home "/"
      org-html-link-up ".")

These options tell the browser to look for an index.html file within the current directory when navigating 'up' and /index.html when navigating 'home'. The leading forward slash tells the browser to look in the root of the website, which maps to the ~/org/web/public folder.

org-info-js

The default org-info-js options seem meant to mirror emacs 'info manuals' as closely as possible. I want something a little more like a classic website, so I set these defaults and tweak them a little within each file:

(setq org-html-use-infojs t
      org-html-infojs-options
      '((path . "/js/org-info.js")
        (view . "showall")
        (toc . "0")
        (ftoc . "0")
        (tdepth . "max")
        (sdepth . "max")
        (mouse . "underline")
        (buttons . "nil")
        (ltoc . "0")
        (up . :html-link-up)
        (home . :html-link-home)))

The first option, the path, is quite important. I've downloaded the js file into ~/org/web/public/js/org-info.js and the leading slash tells the browser to look in that folder no matter how deep in the subfolders the user currently is.

All content is shown by default with (view . "showall") because not everyone prefers to use keys to navigate a website. This lets a typical web user just scroll through the entire article without having to navigate or expand subsections.

Publishing options

(setq my-org-directory "/home/tyler/org") ;; an essential variable in my config

(setq org-publish-project-alist
      `(("blog"
         :base-directory ,(concat my-org-directory "/web/")
         :base-extension "org"
         :publishing-directory ,(concat my-org-directory "/web/public/")
         :publishing-function org-html-publish-to-html
         :recursive t)))

This allows me to publish all articles in the web folder at once with M-x org-publish-project RET blog RET. If I ever change any of the elisp mentioned here, I will have to run this in order to update each web page.
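
One caveat: org-publish skips files whose timestamps haven't changed, so a config-only change needs a forced re-export. The FORCE argument handles that:

;; Re-export every page, ignoring the timestamp cache
(org-publish "blog" t)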

In order to publish a single article, I run C-c C-e h h from within an org file.

Custom css/js/html

I've already mentioned that I prefer to edit the appearance of the website using an external css file. I also added the dark-mode-toggle script as an external file. Here's how that looks in elisp:

(setq org-html-head "
  <link rel=\"stylesheet\" type=\"text/css\" href=\"/css/main.css\"/>
  <link rel=\"stylesheet\"
    type=\"text/css\"
    href=\"/css/dark.css\"
    media=\"(prefers-color-scheme: dark)\">
  <script type=\"module\" src=\"/js/dark-mode-toggle.mjs\"></script>")

This snippet references three files:

  • ~/org/web/public/css/main.css
  • ~/org/web/public/css/dark.css
  • ~/org/web/public/js/dark-mode-toggle.mjs

The first two are manually created and the last can be downloaded from unpkg. The main.css holds imports to all other css files and the dark.css is only applied when 'dark mode' is requested by the user (or if their system preferences specify to use dark mode).

Dark mode

Now that I have the dark mode script installed, I will place the actual toggle button in the preamble of the org export.

(setq org-html-preamble-format '(("en" "<dark-mode-toggle
  appearance=\"switch\"
  dark=\"Dark\"
  light=\"Light   \"
  remember=\"Remember\"></dark-mode-toggle>")))

Tags

A wonderful reddit user by the name of IntelligentTea281 asked me how I manage tags on this blog. The answer is I didn't even think about it!

So now I do this:

I have a tags.org file in the ~/org/web directory that looks like this:

#+TITLE: Tags
#+EXPORT_FILE_NAME: public/tags
#+HTML_LINK_UP: /
#+EXCLUDE_TAGS: ignore
#+CALL: createTagFile()
#+INFOJS_OPT: view:info sdepth:1

* Tag file creation                                                  :ignore:
  #+name: createTagFile
  #+BEGIN_SRC emacs-lisp :results value raw
    (defun org-article-get-descriptor-if-filetag (tag file)
      "If org file has a FILETAG matching tag, get the file path and
    title as a list."
      (with-temp-buffer
        (insert-file-contents file)
        (let ((properties (org-export-get-environment)))
          (if (seq-contains-p (plist-get properties :filetags) tag)
              (list (file-relative-name file)
                    (substring-no-properties
                     (car (plist-get properties :title))))))))

    (defun org-articles-matching-filetag (tag files)
      "Get a list of org articles which have a FILETAG matching `tag'"
      (sort 
       (seq-filter
        (lambda (a) (not (null a)))
        (mapcar
         (lambda (file)
           (org-article-get-descriptor-if-filetag tag file))
         files))
       (lambda (a b) (string-lessp (car (cdr a)) (car (cdr b))))))

    (defun org-html-create-tag-listing (tag files)
      "Create an org ast of links to articles which contain a tag"
      (let ((articles (org-articles-matching-filetag tag files)))
        (unless (null articles)
          `((headline (:title "Tags" :level 1))
            (property-drawer () (node-property (:key "CUSTOM_ID" :value ,tag)))
            (headline (:title ,(format "#%s" tag) :level 2))
            ,(mapcar (lambda (article)
                       `(headline (:title ,(apply 'format "[[file:%s][%s]]" article)
                                          :level 3)))
                     articles)))))

    (defun org-html-create-tags-listing (dir)
      "Create an org-mode document string for listing tags from all
    files in `dir'"
      (let* ((files (directory-files-recursively
                     dir "^\\([^.#]\\)\.+.org$"))
             (tags (sort (mapcar
                          'car
                          (org-global-tags-completion-table files))
                         (lambda (a b) (string-lessp a b))))
             (ast (mapcar
                   (lambda (tag)
                     (org-html-create-tag-listing tag files))
                   tags)))
        (org-element-interpret-data ast)))


    (org-html-create-tags-listing (concat my-org-directory "/web/"))
  #+END_SRC

When exporting this file, the code block will run and insert the results directly into the buffer. Because of the #+EXCLUDE_TAGS property, the actual source block won't be shown, only the results of evaluation.

What `org-html-create-tags-listing' does is look through a folder recursively for org tags. For each tag in that list, it looks through the files again and builds a list of articles whose #+FILETAGS property includes that tag. It reads the relative filename and title of each file and formats them as an org link. It also creates a custom ID for the tag's heading so that it can be linked to from each article.

I've made it so each tag is its own section by using the #+INFOJS_OPT: view:info sdepth:1 property. That way, when linking to a specific tag, only articles matching that tag are shown.
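
For a tag like org, the interpreted AST comes out as plain org text roughly like this (illustrative; the exact output depends on org-element-interpret-data):

* Tags
:PROPERTIES:
:CUSTOM_ID: org
:END:
** #org
*** [[file:emacs/blog.org][How this blog works]]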

There's one more step though, and that is actually showing the #+FILETAGS to the user on each article. That is done with some elisp advice:

(defun org-export-add-filetags-to-subtitle (args) 
  "Include filetags as the subtitle in an org export"
  (let ((info (car (cdr args)))) 
    (plist-put
     info 
     :subtitle `(export-block 
                 (:type "HTML" 
                  :value ,(mapconcat
                           (lambda (tag) 
                             (format
                              "<span class=\"tag\"><span class=\"%s\"><a href=\"/tags#%s\">#%s</a></span></span>"
                              tag tag tag)) 
                           (plist-get info :filetags)
                           ""))))
    args))

(advice-add 'org-html-template 
            :filter-args #'org-export-add-filetags-to-subtitle)

This will add a subtitle on an article with the list of #+FILETAGS during export. Each tag will link to /tags#<tag>. I don't typically use subtitles so this works out well for me.
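
In the exported page, the subtitle then renders as markup along these lines (illustrative):

<p class="subtitle">
  <span class="tag"><span class="org"><a href="/tags#org">#org</a></span></span>
</p>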

I also add a link inside each regular tag by overwriting the org-html--tags function. Regular tags are not used to build the tag lists so these must match existing #+FILETAGS.

(defun org-html--tags (tags info) 
  "Format TAGS into HTML.
  INFO is a plist containing export options."
  (when tags
    (format
     "<span class=\"tag\">%s</span>"
     (mapconcat
      (lambda (tag) 
        (format
         "<span class=\"%s\"><a href=\"/tags#%s\">#%s</a></span>"
         (concat (plist-get info :html-tag-class-prefix) 
                 (org-html-fix-class-name tag))
         tag tag))
      tags
      ""))))

Thanks for the thought provoking comment, IntelligentTea281!

Disqus

I use disqus to display a comment box on each article. Disqus automatically stores and moderates the comments for me and allows quick reactions.

First, each article needs a unique ID. I create this by using M-x org-id-get-create and copying the generated id to an #+ID: file property at the top of the file.

In order to use this ID, I have to add it to the format spec when exporting by advising the org-html-format-spec function.

(defun org-html-add-id-to-format-spec (orig info)
  "Add the file's #+ID keyword to the format spec as %i."
  (let ((id (with-temp-buffer
              (insert-file-contents (plist-get info :input-file))
              (cadr (car (org-collect-keywords '("ID")))))))
    ;; Prepend directly: add-to-list cannot modify a lexically bound variable.
    (cons `(?i . ,id) (funcall orig info))))

(advice-add 'org-html-format-spec :around #'org-html-add-id-to-format-spec)

This adds %i as a valid replacement string in the preamble and postamble for getting the unique ID of a file.

I added disqus to the postamble using a modified version of the disqus universal script. This checks whether the ID is currently available and loads the comments. You'll need to replace 'https://blog.tygr.info' with the base location of your blog. By hardcoding the origin, local development will not accidentally create a new comment thread. Also, replace 'https://tyler-grinn.disqus.com/embed.js' with the correct url given to you by Disqus.

(setq org-html-postamble t)
(setq org-html-postamble-format
      '(("en" "
        <div id=\"disqus_thread\"></div>
        <script>
         var disqus_config = function () {
           this.page.url = \"https://blog.tygr.info\" + location.pathname;
           this.page.identifier = \"%i\";
         };

         function loadComments() {
           var d = document, s = d.createElement('script');
           s.src = 'https://tyler-grinn.disqus.com/embed.js';
           s.setAttribute('data-timestamp', +new Date());
           (d.head || d.body).appendChild(s);
         };

         if (\"%i\" !== 'nil') loadComments()
        </script>
        <p class=\"footer\">%a &nbsp; | &nbsp; %C</p>
       ")))

Notice the page identifier is being set to \"%i\". Now every file that has an "ID" property will also have a comments section.

The actual footer is the last line and will look something like this:

Tyler Grinn | 2021-07-20 Tue 08:36

Org files

Each of my org blog files starts with four lines:

#+TITLE: How this blog works
#+ID: l529oft056j0
#+EXPORT_FILE_NAME: ../public/emacs/blog
#+DESCRIPTION: All the elisp, js, and css behind the scenes

Depending on how big an article is, I may add some tables of contents:

#+INFOJS_OPT: toc:t tdepth:1 ltoc:above

This line in particular (which applies to this article) gives me multiple tables of contents: one at the top and one in each section that has subheaders. The top table of contents is limited to showing only the top-level headers with tdepth:1.

If I want to add some tags to an article, I use #+FILETAGS

#+FILETAGS: :org:syncthing:kubernetes:

Beyond that, the org file is just an org file. Tables, src blocks, and quotes are all exported beautifully without any special consideration.

As I mentioned, each folder has an index.org file which looks something like this:

# -*- org-html-use-infojs: nil; -*-
#+TITLE: Tyler Grinn | Emacs
#+EXPORT_FILE_NAME: ../public/emacs/index
#+HTML_LINK_UP: /

* Emacs articles
** How this blog works
** Custom mode line
   Detailed instructions on how to customize your mode line without
   slowing down emacs
** I have strong opinions on the shape of your cursor

I disable infojs for the index files because having collapsible headings makes them tricky to click when they are also links.

The HTML_LINK_UP option is necessary because the default I set tells the browser to go to the index.html file, which is already what is being shown. So this option should navigate the user to the index.html file above the current subdirectory. The only thing 'above' the emacs folder is the root directory of the website, so I put / for the 'up' action.

Notice the lack of an ID property means that index pages will not have a comment box.

The links are all relative 'file' links. Org automatically renames these from .org links to .html when exporting.
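
For example, a link written as

[[file:blog.org][How this blog works]]

exports to

<a href="blog.html">How this blog works</a>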

CSS files

You can find whole books about how to organize css code. Putting it all in one file is certainly a possibility, but I would urge a little separation of concerns. My current philosophy is to split the css into four areas of consideration:

  • Layout
  • Fonts
  • Theme
  • Components

As such, the main.css file is simply

@import 'layout.css';
@import 'fonts.css';
@import 'theme.css';
@import 'components.css';

The first three affect the website globally. The layout is for gross layout considerations, basically how to position the different boxes. Fonts and themes are self-explanatory. The components file is where I modify the default look of individual org elements.
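
As a concrete sketch, a components rule can restyle every exported src block, since ox wraps each one in an org-src-container div:

/* components.css: soften exported src blocks */
.org-src-container {
  padding: 1em;
  border-radius: 4px;
  overflow-x: auto;
}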

The css is where you can really make your blog unique. Use SVGBackgrounds.com to add a cool pattern, add some fancy fonts, and read up on CSS-Tricks in order to put your own flavor on the blog.

I'm a simple person, and as a simple person my theme.css file is currently sitting empty. Oh well…

You can look at each of my files in the browser in order to get an idea of how to write your own.

Hosting

If you are not a kubernetes person, I would skip this section and figure out a different hosting solution. I love kubernetes. I have my own personal cluster which hosts all my different side projects.

The value proposition of kubernetes is three-fold:

Infrastructure as code

All of the inner workings of the cluster are saved as yaml files. That means a 'backup' of the cluster is literally kilobytes in size, as opposed to a backup of a standard server which is usually comparable in size to the actual disk.

It also means that I have a high-level overview of exactly what is happening on my system. I never have to remember the install directory for nginx, whether or not I've updated apt recently, or which command I used to set up the firewall. It's all just there, in plain text yaml.

High availability

Kubernetes, at its core, is a container orchestration system. It will keep track of all of my deployments and make sure they are running smoothly. In theory, I never need to have any downtime because kubernetes performs a little do-si-do when restarting pods. If it detects a memory leak or decides to scale an application automatically, it will create new instances before deleting the old ones, and with a built-in load balancer traffic will seamlessly switch to the new ones.

Security

Each container can be sandboxed away from all the other containers. If someone does hack one of my apps, all I have to do is delete the pod and update my passwords. And, if they make it all the way into my management account or somehow corrupt the entire cluster, 'infrastructure as code' has got my back yet again.

This guide is meant for Digital Ocean kubernetes, one of the cheapest providers around today. You can get started with a single-node cluster with an external load balancer and work your way up as you add more deployments. The beautiful thing about 'infrastructure as code' is that the decision of a provider does not need to be permanent. The yaml files are valid on every provider, with a few peculiarities.

Starting point

There are better resources than this blog to set up a kubernetes cluster. This guide assumes you have nginx-ingress and cert-manager installed with a ClusterIssuer named letsencrypt.

Here is a great guide from Digital Ocean itself (Hanif Jetha) in order to get to this point.
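
Before continuing, it's worth confirming those pieces are in place (resource names assume the guide's defaults):

kubectl get pods --namespace ingress-nginx
kubectl get clusterissuer letsencrypt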

Opening port 22000

The trickiest thing about Syncthing on kubernetes is the fact that it uses a non-standard TCP port that is not recognized by nginx-ingress. The solution is to modify the nginx-ingress deployment and service to allow non-standard ports.

First, I downloaded and saved the yaml configuration file used to set up nginx-ingress. The latest url can be found here. I did a wget on that address into a file named ingress.yml in order to get a copy of the nginx-ingress deployment. Make sure the version applied to the cluster is the same as the one downloaded.

The only sections needed from that file are the Service named ingress-nginx-controller and the Deployment named ingress-nginx-controller. If you followed the guide above, you should already have a copy of the Service with the added annotations for proxy protocol and hostname.

The rest of the sections are unnecessary and can be deleted as I am not going to modify them.

For the Service, I needed to add a port for syncthing on 22000. The end result should look something like this:

# ingress.yml
---
# Source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/do/deploy.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
    service.beta.kubernetes.io/do-loadbalancer-hostname: "nginx.tygr.info"
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
    - name: syncthing  # Syncthing port
      port: 22000
      protocol: TCP
      targetPort: 22000
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller

For the deployment, I needed to add an arg that tells nginx to look for a ConfigMap named tcp-services to tell it how to configure non-standard ports.

# ingress.yml cont...
args:
  - /nginx-ingress-controller
  - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
  - --election-id=ingress-controller-leader
  - --ingress-class=nginx
  - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
  - --validating-webhook=:8443
  - --validating-webhook-certificate=/usr/local/certificates/cert
  - --validating-webhook-key=/usr/local/certificates/key
  - --tcp-services-configmap=ingress-nginx/tcp-services # Add tcp-services configmap

This is just the relevant snippet of the whole deployment so you can see where I put the extra arg more clearly.

Finally, I needed to add the tcp-services ConfigMap to the ingress.yml file:

# ingress.yml cont...
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "22000": "default/syncthing:22000:PROXY"

The :PROXY on the end tells nginx to decode the proxy protocol added by the load balancer instead of passing it along, so the syncthing service receives plain TCP.

Apply this file with kubectl apply -f ingress.yml
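
To confirm the change took, check that the controller's Service now exposes 22000 alongside 80 and 443:

kubectl get service ingress-nginx-controller --namespace ingress-nginx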

Volume

Kubernetes pods are ephemeral. That means if you tried to deploy syncthing without attaching a PersistentVolume, it would only work while the current pod exists. As soon as a kubernetes do-si-do is triggered, all data would be lost.

In order to attach a PersistentVolume to a deployment on Digital Ocean, I need to create a PersistentVolumeClaim:

# syncthing/volume.yml
---
# To restore from a snapshot:
#
# Replace spec.dataSource.name with a valid volumesnapshot name
# Then: kubectl replace --force -f <this-file>
# Finally, delete the pods currently using this volume. The replacement
# pods will wait for the volume to be restored before being created.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: syncthing
spec:
#  dataSource:
#    name: syncthing-2021-01-14-19.50.26
#    kind: VolumeSnapshot
#    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: do-block-storage

Modify the storage to your liking; I've chosen 10 gigabytes here. I've added some comments to show how you could restore from a VolumeSnapshot if needed.

Apply this file with kubectl apply -f syncthing/volume.yml

Backups

I've mentioned VolumeSnapshots enough times already in this article; let's get into it.

Here is a typical volume snapshot configuration:

---
# syncthing/volume-snapshot.yml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: syncthing

I can create this resource with kubectl create -f syncthing/volume-snapshot.yml. However, I wrote a very simple tool to create snapshots automatically according to a CronJob. This was necessary because Digital Ocean limits the number of snapshots you can have at one time, so the tool automatically cleans up old backups as new ones are created.

So that I'm not updating code in two places, I'll just let you follow along with the guide I've already written.

Deployment

Finally, I set up the deployment for both the syncthing container and an nginx container named 'blog'.

Blog nginx config

This ConfigMap will be used by the nginx container to serve the blog.

# syncthing/server.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: blog-nginx
data:
  default.conf: |
    access_log /dev/stdout;
    error_log /dev/stdout info;

    server {
      include mime.types;
      types {
        application/javascript mjs;
      }

      listen 80 default_server;
      root /var/www;
      expires off;

      location / {
        index index.html;
        try_files $uri $uri.html $uri/ =404;
      }
    }

Notably here, the access and error logs are redirected to stdout so that I can see them in the kubernetes dashboard.

The mime type for .mjs files needs to be manually specified in order for the dark-mode-toggle script to be loaded correctly.

Also, expires off; will prevent browsers from caching the blog. This is most noticeable with the css files. In my work as a web developer, I will add a file hash to the css and js filenames so that as soon as they are updated, the browser cache is reset. For this tiny blog with usually zero traffic, I don't mind making you download it on every refresh.
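
If you did want long-lived caching, the hashed-filename approach from my work translates to a rule inside the server block like this (illustrative; it assumes names such as main.3f9c2a.css):

# Cache fingerprinted assets aggressively; the hash changes on every update
location ~* "\.[0-9a-f]{6,}\.(css|mjs|js)$" {
  expires max;
  add_header Cache-Control "public, immutable";
}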

Pod configuration

Here is the actual 'Deployment' in kubernetes terms.

# syncthing/server.yml cont...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: syncthing
  name: syncthing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: syncthing
  template:
    metadata:
      labels:
        app: syncthing
    spec:
      containers:
        - name: syncthing
          image: syncthing/syncthing:latest
          ports:
            - containerPort: 8384
              protocol: TCP
            - containerPort: 22000
              protocol: TCP
          volumeMounts:
            - mountPath: /var/syncthing
              name: syncthing
        - name: blog
          image: nginx:alpine
          ports:
            - containerPort: 80
              protocol: TCP
          volumeMounts:
            - name: syncthing
              mountPath: /var/www
              subPath: org/web/public
              readOnly: true
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d
              readOnly: true
      volumes:
        - name: syncthing
          persistentVolumeClaim:
            claimName: syncthing
        - name: nginx-conf
          configMap:
            name: blog-nginx
            items:
              - key: default.conf
                path: default.conf

This deployment will have two containers: syncthing and blog.

Because Digital Ocean volumes are ReadWriteOnce, the blog container must mount the syncthing volume readOnly. Another requirement for ReadWriteOnce volumes is that the blog and the syncthing containers must be on the same node within the cluster. That is taken care of by putting them in the same pod, but if you need multiple nginx containers you could separate them and use node affinity to ensure they are on the same node.

Also, since the blog container does not need to know about the source files, or any of my other org files, I mount the volume at a subPath of org/web/public.

Services

Services allow pods to communicate within the cluster. I need one for the syncthing gui running on port 8384, the syncthing service running on port 22000, and the blog running on port 80.

# syncthing/server.yml cont...
---
apiVersion: v1
kind: Service
metadata:
  name: syncthing-gui
spec:
  ports:
    - port: 80
      targetPort: 8384
  selector:
    app: syncthing
---
apiVersion: v1
kind: Service
metadata:
  name: syncthing
spec:
  ports:
    - port: 22000
  selector:
    app: syncthing
---
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  ports:
    - port: 80
  selector:
    app: syncthing

Notice that the syncthing-gui service exposes the container's port 8384 as port 80.

Ingresses

Finally, I need to tell cert-manager and the external load balancer how to direct traffic to both the syncthing gui and the blog service.

First, I added two subdomain records to my domain's DNS configuration:

  • syncthing.tygr.info
  • blog.tygr.info

Obviously, replace 'tygr.info' with your domain name. Both of these should be A records pointing at the external load balancer.
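
Once the records propagate, both names should resolve to the load balancer's IP; a quick check with dig:

dig +short syncthing.tygr.info
dig +short blog.tygr.info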

Next, I added the ingresses to kick off the letsencrypt process.

# syncthing/server.yml cont...
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  name: syncthing
spec:
  tls:
    - secretName: syncthing.tygr.info
      hosts:
        - syncthing.tygr.info
  rules:
    - host: syncthing.tygr.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: syncthing-gui
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  name: blog
spec:
  tls:
    - secretName: blog.tygr.info
      hosts:
        - blog.tygr.info
  rules:
    - host: blog.tygr.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port:
                  number: 80

You should replace any instances of 'tygr.info' with your own domain name here as well.

Apply all of this with kubectl apply -f syncthing/server.yml

Now, I can hook up the local ~/org folder as a syncthing folder and sync it to the cloud at https://syncthing.tygr.info. For the folder configuration, make sure the location is /var/syncthing/org on the server. After it's finished syncing, you should be able to see the blog at https://blog.tygr.info

I don't need to have these ingresses open all the time for syncthing to work. When I'm not using the gui, I turn it off with kubectl delete ingress syncthing, and turn it on by applying this file again (kubectl apply -f syncthing/server.yml).

Conclusion

Blogging is a new experience for me. I always check articles thrice before publishing and still feel nervous promoting it anywhere. This website is primarily for myself but I'd still love any feedback.