As I was putting this site together, I realized there were some boilerplate workflows I wanted to improve.
The typical workflow for creating a post looked like this:
- Generate a markdown file
- Write content
- Commit to repo
- Build site
- rsync to Google Cloud Storage
My goal was to speed this up a bit, hopefully making room in the future for a more API-driven posting approach. That said, I was willing to start small by cutting out a few of the end steps first.
My first instinct was to find some sort of repo-sync tool for Cloud Storage. I would still have to build and commit the compiled site to the repo, but that would at least cut out the rsync part.
Unfortunately, according to the almighty Stack Overflow, no dice on that front: you’d still need a go-between server to do the rsync’ing.
… But after thinking about it a bit, I realized Google Cloud Container Builder could serve exactly that function. It spins up, runs commands, and then spins down. All controlled through a single config file. Limited hassle, limited fuss.
I dove through the docs and it seemed straightforward enough. The source code gets auto-mounted to /workspace (which is also the working directory), and each build step runs in its own Docker container. Despite the name, you don’t actually have to build a container at the end of it, so rsync’ing to a bucket is fair game.
A little more searching led to a tutorial that was just what I wanted… almost. That article just shows the rsync’ing part… but if Container Builder can already run commands, why should I rebuild the site on my machine before committing it?
All said, the new game plan looked like the following:
- Generate a markdown file
- Write content
- Commit to repo
- [Let Cloud Container Builder handle the rest]
As demonstrated in the docs, Google provides a few cloud-builders out of the box, including one for gsutil… which is a big relief, since I didn’t want to mess with authentication. However, there’s no pre-built builder image for Hugo. To Docker Hub!
Sadly, there appears to be no official container for the Hugo project, and no one had an up-to-date container specifically for building a site (several wanted to build and serve). As a result, I modified one of the Dockerfiles I found there, ending up with the following:
FROM debian:jessie
MAINTAINER [email protected]

# Install pygments (for syntax highlighting)
RUN apt-get -qq update \
  && DEBIAN_FRONTEND=noninteractive apt-get -qq install -y --no-install-recommends python-pygments git ca-certificates asciidoc \
  && rm -rf /var/lib/apt/lists/*

# Download and install hugo
ENV HUGO_VERSION 0.31
ENV HUGO_BINARY hugo_${HUGO_VERSION}_Linux-64bit.deb

ADD https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/${HUGO_BINARY} /tmp/hugo.deb
RUN dpkg -i /tmp/hugo.deb \
  && rm /tmp/hugo.deb
It seemed a little too hacky to push back to Docker Hub, so instead I just used the private Container Registry in my own Cloud project.
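For reference, getting the image into that registry is just a build, tag, and push. I won’t swear these were my exact commands, but it was something along these lines (newer gcloud versions use auth configure-docker; older ones used gcloud docker -- push):

# Build the builder image locally, tagged for the project registry
docker build -t us.gcr.io/ftwynn-hugo-blog/hugo-builder:v1 .

# Let gcloud wire Docker up with gcr.io credentials, then push
gcloud auth configure-docker
docker push us.gcr.io/ftwynn-hugo-blog/hugo-builder:v1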
This led to a pretty straightforward cloudbuild.yaml file:
steps:
- name: us.gcr.io/ftwynn-hugo-blog/hugo-builder:v1
  args: ["/usr/local/bin/hugo"]
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", "./public", "gs://www.ftwynn.com"]
And it worked! Huzzah! All was well… until I tried to switch themes. Then I got a weird error message in the logs:
ERROR 2017/11/24 20:23:18 Error while rendering "home": template: index.html:1:3: executing "index.html" at <partial "header" .>: error calling partial: template: partials/header.html:4:3: executing "partials/header.html" at <partial "head.html" ...>: error calling partial: Partial "head.html" not found
On closer look, I realized the new theme was stored as a git submodule: just a pointer in the repo, transparently checked out where it needed to be on my machine, but never actually pulled down on the build machine.
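A quick way to confirm this kind of thing, for the curious (plain git, nothing Container Builder specific):

# A single mode-160000 "commit" entry, instead of a tree of files, means a submodule pointer
git ls-files -s themes/
# The submodule remote URL lives in .gitmodules
git submodule status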
Adding a git clone step to re-pull the theme in the builder seemed to fix it, leading to a final cloudbuild.yaml of:
steps:
- name: gcr.io/cloud-builders/git
  args: ['clone', 'https://github.com/mtn/cocoa-eh-hugo-theme.git', 'themes/cocoa-eh']
- name: us.gcr.io/ftwynn-hugo-blog/hugo-builder:v1
  args: ["/usr/local/bin/hugo"]
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", "./public", "gs://www.ftwynn.com"]
Aside from adding a build trigger on every push to master, that’s all there was to it. All told, I was pretty surprised by how easy it was, and the free tier allotment for Container Builder is pretty generous, so I’m looking forward to tweaking it more in the future.
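The trigger itself can be set up in the console, or these days with gcloud. A rough sketch of the latter, assuming the site lives in a Cloud Source Repository called hugo-blog (a name I’m making up for illustration):

# Rebuild and redeploy on every push to master
gcloud builds triggers create cloud-source-repositories \
  --repo=hugo-blog \
  --branch-pattern="^master$" \
  --build-config=cloudbuild.yaml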
Update
I wasn’t satisfied with the extra git clone in the cloudbuild.yaml, so in the course of other investigations I took a deeper look, which led to this Hugo issue.
Turns out there are three ways to pull a theme into your site:
- git clone
- git submodule
- git subtree
After significant experimentation (and digging, because that third option isn’t easy to find), I found that subtrees are the right answer from a Container Builder perspective. All the articles about using subtrees are overkill (and tend to presume you want to commit to both repos), but a decent one is here. I didn’t see a need to add the theme repo as a remote; just the subtree add command (shown below) was fine for me.
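For reference, the conversion boils down to dropping the old submodule reference and adding the theme back as a subtree, roughly:

# Remove the old submodule pointer (and tidy up .gitmodules if git rm leaves it behind)
git rm themes/cocoa-eh
git commit -m "Remove theme submodule"

# Pull the theme files into the repo proper as a subtree (--squash keeps its history out of mine)
git subtree add --prefix themes/cocoa-eh https://github.com/mtn/cocoa-eh-hugo-theme.git master --squash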
So in the end, the cloudbuild.yaml is back to:
steps:
- name: us.gcr.io/ftwynn-hugo-blog/hugo-builder:v1
  args: ["/usr/local/bin/hugo"]
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", "./public", "gs://www.ftwynn.com"]